1 change: 0 additions & 1 deletion maint_tools/test_docstrings.py
@@ -41,7 +41,6 @@
"GaussianRandomProjection",
"GradientBoostingClassifier",
"GradientBoostingRegressor",
"GraphicalLassoCV",
"GridSearchCV",
"HalvingGridSearchCV",
"HalvingRandomSearchCV",
41 changes: 22 additions & 19 deletions sklearn/covariance/_graph_lasso.py
@@ -637,7 +637,7 @@ class GraphicalLassoCV(GraphicalLasso):
         stable.

     n_jobs : int, default=None
-        number of jobs to run in parallel.
+        Number of jobs to run in parallel.
         ``None`` means 1 unless in a :obj:`joblib.parallel_backend` context.
         ``-1`` means using all processors. See :term:`Glossary <n_jobs>`
         for more details.
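
As a usage note (a sketch, not part of this diff): with n_jobs=None, GraphicalLassoCV runs a single job unless an enclosing joblib.parallel_backend context supplies a worker count, exactly as the parameter description above states.

    import numpy as np
    from joblib import parallel_backend
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.RandomState(0)
    X = rng.multivariate_normal(mean=np.zeros(4), cov=np.eye(4), size=200)

    # n_jobs=None picks up the two workers from the surrounding backend context.
    with parallel_backend("loky", n_jobs=2):
        cov = GraphicalLassoCV(n_jobs=None).fit(X)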
@@ -710,6 +710,24 @@ class GraphicalLassoCV(GraphicalLasso):

         .. versionadded:: 0.24

+    See Also
+    --------
+    graphical_lasso : L1-penalized covariance estimator.
+    GraphicalLasso : Sparse inverse covariance with
+        cross-validated choice of the l1 penalty.
+
+    Notes
+    -----
+    The search for the optimal penalization parameter (alpha) is done on an
+    iteratively refined grid: first the cross-validated scores on a grid are
+    computed, then a new refined grid is centered around the maximum, and so
+    on.
+
+    One of the challenges which is faced here is that the solvers can
+    fail to converge to a well-conditioned estimate. The corresponding
+    values of alpha then come out as missing values, but the optimum may
+    be close to these missing values.
+
     Examples
     --------
     >>> import numpy as np
@@ -730,22 +748,6 @@ class GraphicalLassoCV(GraphicalLasso):
            [0.017, 0.036, 0.094, 0.69 ]])
     >>> np.around(cov.location_, decimals=3)
     array([0.073, 0.04 , 0.038, 0.143])
-
-    See Also
-    --------
-    graphical_lasso, GraphicalLasso
-
-    Notes
-    -----
-    The search for the optimal penalization parameter (alpha) is done on an
-    iteratively refined grid: first the cross-validated scores on a grid are
-    computed, then a new refined grid is centered around the maximum, and so
-    on.
-
-    One of the challenges which is faced here is that the solvers can
-    fail to converge to a well-conditioned estimate. The corresponding
-    values of alpha then come out as missing values, but the optimum may
-    be close to these missing values.
     """

     def __init__(
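
The Notes section moved above summarizes the alpha search: score a coarse grid by cross-validation, re-center a finer grid on the best value, and repeat. A simplified standalone sketch of that refinement loop (illustrative only; the estimator's real implementation lives in fit and treats alphas whose solver failed to converge as missing values):

    import numpy as np

    def refine_alpha(score, lo=0.01, hi=1.0, n_points=4, n_refinements=4):
        # Iteratively shrink a log-spaced grid around the best-scoring alpha.
        for _ in range(n_refinements):
            grid = np.logspace(np.log10(lo), np.log10(hi), n_points)
            scores = [score(a) for a in grid]  # e.g. mean cross-validated log-likelihood
            best = int(np.argmax(scores))
            lo = grid[max(best - 1, 0)]
            hi = grid[min(best + 1, n_points - 1)]
        return grid[best]

    # Toy objective peaking near alpha = 0.3:
    best_alpha = refine_alpha(lambda a: -(np.log10(a) - np.log10(0.3)) ** 2)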
@@ -776,19 +778,20 @@ def __init__(
         self.n_jobs = n_jobs

     def fit(self, X, y=None):
-        """Fits the GraphicalLasso covariance model to X.
+        """Fit the GraphicalLasso covariance model to X.

         Parameters
         ----------
         X : array-like of shape (n_samples, n_features)
-            Data from which to compute the covariance estimate
+            Data from which to compute the covariance estimate.

         y : Ignored
             Not used, present for API consistency by convention.

         Returns
         -------
         self : object
+            Returns the instance itself.
         """
         # Covariance does not make sense for a single feature
         X = self._validate_data(X, ensure_min_features=2, estimator=self)
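
Because fit returns the instance itself, construction and fitting chain naturally, and the cross-validated results are read off the fitted estimator afterwards. A short usage sketch (relying on the documented fitted attributes such as alpha_ and covariance_):

    import numpy as np
    from sklearn.covariance import GraphicalLassoCV

    rng = np.random.RandomState(42)
    X = rng.multivariate_normal(mean=np.zeros(4), cov=np.eye(4), size=200)

    cov = GraphicalLassoCV(cv=3).fit(X)  # fit returns self, so the call chains
    print(cov.alpha_)                    # penalty selected by cross-validation
    print(cov.covariance_.shape)         # (4, 4) estimated covariance matrix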