Describe the bug
I cloned and installed the current main branch from the repository, ran pytest sklearn
out of curiosity, and encountered a ValueError for the function sklearn.externals._lobpcg.lobpcg.
Further examination revealed that it occurred in 'pytest -vl sklearn/tests/test_docstrings.py -k test_docstring'.
I suggest either fixing the docstring or temporarily adding the function to FUNCTION_DOCSTRING_IGNORE_LIST.
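As an illustration of the first option, the four numpydoc error codes reported in the failure output below each map to a small textual fix. The following sketch is not a patch to the actual file; it only demonstrates, on the line contents quoted in the failure output, what each error code asks for:

```python
import re

# SS03: the one-line summary must end with a period.
summary = "Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)"
summary_fixed = summary + "."

# PR08: the description of parameter "Y" must start with a capital letter.
y_desc = "n-by-sizeY matrix of constraints (non-sparse), sizeY < n"
y_desc_fixed = "N" + y_desc[1:]

# RT05: each return value description must finish with ".".
w_desc = "Array of ``k`` eigenvalues"
w_desc_fixed = w_desc + "."

def collapse_blank_lines(docstring: str) -> str:
    """GL03: keep at most one blank line between paragraphs and none at the end."""
    return re.sub(r"\n{3,}", "\n\n", docstring).rstrip("\n") + "\n"
```

The second option (the ignore list) only silences the test via an xfail marker, so the docstring fix would still be needed upstream in scipy, where this vendored file originates.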
Steps/Code to Reproduce
pytest -vl sklearn/tests/test_docstrings.py -k test_docstring
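Alternatively, the same validation can be triggered without pytest by calling numpydoc directly on the dotted function name. This is a sketch assuming numpydoc and a scikit-learn build that still contains sklearn.externals._lobpcg are importable:

```python
# Run numpydoc validation directly on the offending function and
# print the reported error codes (expected to include GL03, SS03,
# PR08, and RT05, matching the pytest failure).
from numpydoc.validate import validate

res = validate("sklearn.externals._lobpcg.lobpcg")
for code, message in res["errors"]:
    print(code, message)
```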
Expected Results
No error is thrown.
Actual Results
========================================================== FAILURES ===========================================================
__________________________________ test_function_docstring[sklearn.externals._lobpcg.lobpcg] __________________________________
function_name = 'sklearn.externals._lobpcg.lobpcg'
request = <FixtureRequest for <Function test_function_docstring[sklearn.externals._lobpcg.lobpcg]>>
@pytest.mark.parametrize("function_name", get_all_functions_names())
def test_function_docstring(function_name, request):
"""Check function docstrings using numpydoc."""
if function_name in FUNCTION_DOCSTRING_IGNORE_LIST:
request.applymarker(
pytest.mark.xfail(run=False, reason="TODO pass numpydoc validation")
)
res = numpydoc_validation.validate(function_name)
res["errors"] = list(filter_errors(res["errors"], method="function"))
if res["errors"]:
msg = repr_errors(res, method=f"Tested function: {function_name}")
> raise ValueError(msg)
E ValueError:
E
E /home/fabian/Projects/scikit-learn/sklearn/externals/_lobpcg.py
E
E Tested function: sklearn.externals._lobpcg.lobpcg
E
E Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)
E
E LOBPCG is a preconditioned eigensolver for large symmetric positive
E definite (SPD) generalized eigenproblems.
E
E Parameters
E ----------
E A : {sparse matrix, dense matrix, LinearOperator}
E The symmetric linear operator of the problem, usually a
E sparse matrix. Often called the "stiffness matrix".
E X : ndarray, float32 or float64
E Initial approximation to the ``k`` eigenvectors (non-sparse). If `A`
E has ``shape=(n,n)`` then `X` should have shape ``shape=(n,k)``.
E B : {dense matrix, sparse matrix, LinearOperator}, optional
E The right hand side operator in a generalized eigenproblem.
E By default, ``B = Identity``. Often called the "mass matrix".
E M : {dense matrix, sparse matrix, LinearOperator}, optional
E Preconditioner to `A`; by default ``M = Identity``.
E `M` should approximate the inverse of `A`.
E Y : ndarray, float32 or float64, optional
E n-by-sizeY matrix of constraints (non-sparse), sizeY < n
E The iterations will be performed in the B-orthogonal complement
E of the column-space of Y. Y must be full rank.
E tol : scalar, optional
E Solver tolerance (stopping criterion).
E The default is ``tol=n*sqrt(eps)``.
E maxiter : int, optional
E Maximum number of iterations. The default is ``maxiter = 20``.
E largest : bool, optional
E When True, solve for the largest eigenvalues, otherwise the smallest.
E verbosityLevel : int, optional
E Controls solver output. The default is ``verbosityLevel=0``.
E retLambdaHistory : bool, optional
E Whether to return eigenvalue history. Default is False.
E retResidualNormsHistory : bool, optional
E Whether to return history of residual norms. Default is False.
E
E Returns
E -------
E w : ndarray
E Array of ``k`` eigenvalues
E v : ndarray
E An array of ``k`` eigenvectors. `v` has the same shape as `X`.
E lambdas : list of ndarray, optional
E The eigenvalue history, if `retLambdaHistory` is True.
E rnorms : list of ndarray, optional
E The history of residual norms, if `retResidualNormsHistory` is True.
E
E Notes
E -----
E If both ``retLambdaHistory`` and ``retResidualNormsHistory`` are True,
E the return tuple has the following format
E ``(lambda, V, lambda history, residual norms history)``.
E
E In the following ``n`` denotes the matrix size and ``m`` the number
E of required eigenvalues (smallest or largest).
E
E The LOBPCG code internally solves eigenproblems of the size ``3m`` on every
E iteration by calling the "standard" dense eigensolver, so if ``m`` is not
E small enough compared to ``n``, it does not make sense to call the LOBPCG
E code, but rather one should use the "standard" eigensolver, e.g. numpy or
E scipy function in this case.
E If one calls the LOBPCG algorithm for ``5m > n``, it will most likely break
E internally, so the code tries to call the standard function instead.
E
E It is not that ``n`` should be large for the LOBPCG to work, but rather the
E ratio ``n / m`` should be large. It you call LOBPCG with ``m=1``
E and ``n=10``, it works though ``n`` is small. The method is intended
E for extremely large ``n / m``.
E
E The convergence speed depends basically on two factors:
E
E 1. How well relatively separated the seeking eigenvalues are from the rest
E of the eigenvalues. One can try to vary ``m`` to make this better.
E
E 2. How well conditioned the problem is. This can be changed by using proper
E preconditioning. For example, a rod vibration test problem (under tests
E directory) is ill-conditioned for large ``n``, so convergence will be
E slow, unless efficient preconditioning is used. For this specific
E problem, a good simple preconditioner function would be a linear solve
E for `A`, which is easy to code since A is tridiagonal.
E
E References
E ----------
E .. [1] A. V. Knyazev (2001),
E Toward the Optimal Preconditioned Eigensolver: Locally Optimal
E Block Preconditioned Conjugate Gradient Method.
E SIAM Journal on Scientific Computing 23, no. 2,
E pp. 517-541. :doi:`10.1137/S1064827500366124`
E
E .. [2] A. V. Knyazev, I. Lashuk, M. E. Argentati, and E. Ovchinnikov
E (2007), Block Locally Optimal Preconditioned Eigenvalue Xolvers
E (BLOPEX) in hypre and PETSc. :arxiv:`0705.2626`
E
E .. [3] A. V. Knyazev's C and MATLAB implementations:
E https://github.com/lobpcg/blopex
E
E Examples
E --------
E
E Solve ``A x = lambda x`` with constraints and preconditioning.
E
E >>> import numpy as np
E >>> from scipy.sparse import spdiags, issparse
E >>> from scipy.sparse.linalg import lobpcg, LinearOperator
E >>> n = 100
E >>> vals = np.arange(1, n + 1)
E >>> A = spdiags(vals, 0, n, n)
E >>> A.toarray()
E array([[ 1., 0., 0., ..., 0., 0., 0.],
E [ 0., 2., 0., ..., 0., 0., 0.],
E [ 0., 0., 3., ..., 0., 0., 0.],
E ...,
E [ 0., 0., 0., ..., 98., 0., 0.],
E [ 0., 0., 0., ..., 0., 99., 0.],
E [ 0., 0., 0., ..., 0., 0., 100.]])
E
E Constraints:
E
E >>> Y = np.eye(n, 3)
E
E Initial guess for eigenvectors, should have linearly independent
E columns. Column dimension = number of requested eigenvalues.
E
E >>> rng = np.random.default_rng()
E >>> X = rng.random((n, 3))
E
E Preconditioner in the inverse of A in this example:
E
E >>> invA = spdiags([1./vals], 0, n, n)
E
E The preconditiner must be defined by a function:
E
E >>> def precond( x ):
E ... return invA @ x
E
E The argument x of the preconditioner function is a matrix inside `lobpcg`,
E thus the use of matrix-matrix product ``@``.
E
E The preconditioner function is passed to lobpcg as a `LinearOperator`:
E
E >>> M = LinearOperator(matvec=precond, matmat=precond,
E ... shape=(n, n), dtype=np.float64)
E
E Let us now solve the eigenvalue problem for the matrix A:
E
E >>> eigenvalues, _ = lobpcg(A, X, Y=Y, M=M, largest=False)
E >>> eigenvalues
E array([4., 5., 6.])
E
E Note that the vectors passed in Y are the eigenvectors of the 3 smallest
E eigenvalues. The results returned are orthogonal to those.
E
E # Errors
E
E - GL03: Double line break found; please use only one blank line to separate sections or paragraphs, and do not leave blank lines at the end of docstrings
E - SS03: Summary does not end with a period
E - PR08: Parameter "Y" description should start with a capital letter
E - RT05: Return value description should finish with "."
function_name = 'sklearn.externals._lobpcg.lobpcg'
msg = '\n\n/home/fabian/Projects/scikit-learn/sklearn/externals/_lobpcg.py\n\nTested function: sklearn.externals._lobpcg.lob...Parameter "Y" description should start with a capital letter\n - RT05: Return value description should finish with "."'
request = <FixtureRequest for <Function test_function_docstring[sklearn.externals._lobpcg.lobpcg]>>
res = {'deprecated': False, 'docstring': 'Locally Optimal Block Preconditioned Conjugate Gradient Method (LOBPCG)\n\nLOBPCG ... description should finish with "."')], 'file': '/home/fabian/Projects/scikit-learn/sklearn/externals/_lobpcg.py', ...}
sklearn/tests/test_docstrings.py:296: ValueError
================================ 1 failed, 2032 passed, 97 xfailed, 4 xpassed in 11.19 seconds ================================
Versions
System:
python: 3.8.13 (default, Mar 28 2022, 11:38:47) [GCC 7.5.0]
executable: /home/fabian/anaconda3/envs/sklearn-dev/bin/python3
machine: Linux-5.13.0-48-generic-x86_64-with-glibc2.17
Python dependencies:
sklearn: 1.2.dev0
pip: 21.2.4
setuptools: 61.2.0
numpy: 1.17.3
scipy: 1.3.2
Cython: None
pandas: None
matplotlib: None
joblib: 1.0.0
threadpoolctl: 2.0.0
Built with OpenMP: True
threadpoolctl info:
filepath: /home/fabian/anaconda3/envs/sklearn-dev/lib/libgomp.so.1.0.0
prefix: libgomp
user_api: openmp
internal_api: openmp
version: None
num_threads: 12
filepath: /home/fabian/anaconda3/envs/sklearn-dev/lib/python3.8/site-packages/numpy/.libs/libopenblasp-r0-34a18dc3.3.7.so
prefix: libopenblas
user_api: blas
internal_api: openblas
version: 0.3.7
num_threads: 12
threading_layer: pthreads
filepath: /home/fabian/anaconda3/envs/sklearn-dev/lib/python3.8/site-packages/scipy/.libs/libopenblasp-r0-2ecf47d5.3.7.dev.so
prefix: libopenblas
user_api: blas
internal_api: openblas
version: 0.3.7.dev
num_threads: 12
threading_layer: pthreads