Add classes_ to classifier attributes #12509

Closed
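This PR documents the classes_ attribute in every classifier docstring that was missing it, normalizes its shape notation to (n_classes,), and adds a test enforcing the convention. For context, a fitted scikit-learn classifier exposes the sorted training labels through classes_; a minimal sketch, assuming a recent scikit-learn (output indicative):

>>> from sklearn.linear_model import LogisticRegression
>>> clf = LogisticRegression(solver='lbfgs').fit([[0.], [1.], [2.]], ['a', 'b', 'b'])
>>> clf.classes_
array(['a', 'b'], dtype='<U1')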
4 changes: 2 additions & 2 deletions sklearn/calibration.py
@@ -83,8 +83,8 @@ class CalibratedClassifierCV(BaseEstimator, ClassifierMixin):

Attributes
----------
-classes_ : array, shape (n_classes)
-    The class labels.
+classes_ : array, shape = (n_classes,)
+    Class labels.

calibrated_classifiers_ : list (len() equal to cv or 1 if cv == "prefit")
    The list of calibrated classifiers, one for each cross-validation fold,
3 changes: 3 additions & 0 deletions sklearn/discriminant_analysis.py
@@ -594,6 +594,9 @@ class QuadraticDiscriminantAnalysis(BaseEstimator, ClassifierMixin):
of the Gaussian distributions along its principal axes, i.e. the
variance in the rotated coordinate system.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
3 changes: 3 additions & 0 deletions sklearn/ensemble/gradient_boosting.py
@@ -1905,6 +1905,9 @@ class GradientBoostingClassifier(BaseGradientBoosting, ClassifierMixin):
The collection of fitted sub-estimators. ``loss_.K`` is 1 for binary
classification, otherwise n_classes.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Notes
-----
The features are always randomly permuted at each split. Therefore,
6 changes: 6 additions & 0 deletions sklearn/linear_model/logistic.py
@@ -1167,6 +1167,9 @@ class LogisticRegression(BaseEstimator, LinearClassifierMixin,
In SciPy <= 1.0.0 the number of lbfgs iterations may exceed
``max_iter``. ``n_iter_`` will now report at most ``max_iter``.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> from sklearn.datasets import load_iris
@@ -1638,6 +1641,9 @@ class LogisticRegressionCV(LogisticRegression, BaseEstimator,
Actual number of iterations for all classes, folds and Cs.
In the binary or multinomial cases, the first dimension is equal to 1.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> from sklearn.datasets import load_iris
3 changes: 3 additions & 0 deletions sklearn/linear_model/passive_aggressive.py
@@ -133,6 +133,9 @@ class PassiveAggressiveClassifier(BaseSGDClassifier):
The actual number of iterations to reach the stopping criterion.
For multiclass fits, it is the maximum over every binary fit.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> from sklearn.linear_model import PassiveAggressiveClassifier
3 changes: 3 additions & 0 deletions sklearn/linear_model/perceptron.py
@@ -117,6 +117,9 @@ class Perceptron(BaseSGDClassifier):
The actual number of iterations to reach the stopping criterion.
For multiclass fits, it is the maximum over every binary fit.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Notes
-----

6 changes: 6 additions & 0 deletions sklearn/linear_model/ridge.py
@@ -775,6 +775,9 @@ class RidgeClassifier(LinearClassifierMixin, _BaseRidge):
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
n_iter_ : array or None, shape (n_targets,)
Actual number of iterations for each target. Available only for
sag and lsqr solvers. Other solvers will return None.
@@ -1358,6 +1361,9 @@ class RidgeClassifierCV(LinearClassifierMixin, _BaseRidgeCV):
Independent term in decision function. Set to 0.0 if
``fit_intercept = False``.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
alpha_ : float
Estimated regularization parameter

3 changes: 3 additions & 0 deletions sklearn/linear_model/stochastic_gradient.py
@@ -938,6 +938,9 @@ class SGDClassifier(BaseSGDClassifier):
intercept_ : array, shape (1,) if n_classes == 2 else (n_classes,)
Constants in decision function.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
n_iter_ : int
The actual number of iterations to reach the stopping criterion.
For multiclass fits, it is the maximum over every binary fit.
12 changes: 12 additions & 0 deletions sklearn/naive_bayes.py
@@ -143,6 +143,9 @@ class GaussianNB(BaseNB):
epsilon_ : float
absolute additive value to variances

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> import numpy as np
@@ -676,6 +679,9 @@ class MultinomialNB(BaseDiscreteNB):
during fitting. This value is weighted by the sample weight when
provided.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> import numpy as np
@@ -777,6 +783,9 @@ class ComplementNB(BaseDiscreteNB):
Number of samples encountered for each feature during fitting. This
value is weighted by the sample weight when provided.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> import numpy as np
@@ -878,6 +887,9 @@ class BernoulliNB(BaseDiscreteNB):
during fitting. This value is weighted by the sample weight when
provided.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> import numpy as np
10 changes: 10 additions & 0 deletions sklearn/neighbors/classification.py
@@ -82,6 +82,11 @@ class KNeighborsClassifier(NeighborsBase, KNeighborsMixin,
for more details.
Doesn't affect :meth:`fit` method.

+Attributes
+----------
+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> X = [[0], [1], [2], [3]]
@@ -296,6 +301,11 @@ class RadiusNeighborsClassifier(NeighborsBase, RadiusNeighborsMixin,
``-1`` means using all processors. See :term:`Glossary <n_jobs>`
for more details.

+Attributes
+----------
+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> X = [[0], [1], [2], [3]]
3 changes: 3 additions & 0 deletions sklearn/neighbors/nearest_centroid.py
@@ -48,6 +48,9 @@ class NearestCentroid(BaseEstimator, ClassifierMixin):
centroids_ : array-like, shape = [n_classes, n_features]
Centroid of each class

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> from sklearn.neighbors.nearest_centroid import NearestCentroid
4 changes: 2 additions & 2 deletions sklearn/semi_supervised/label_propagation.py
@@ -340,7 +340,7 @@ class LabelPropagation(BaseLabelPropagation):
X_ : array, shape = [n_samples, n_features]
Input array.

-classes_ : array, shape = [n_classes]
+classes_ : array, shape = (n_classes,)
The distinct labels used in classifying instances.

label_distributions_ : array, shape = [n_samples, n_classes]
@@ -454,7 +454,7 @@ class LabelSpreading(BaseLabelPropagation):
X_ : array, shape = [n_samples, n_features]
Input array.

-classes_ : array, shape = [n_classes]
+classes_ : array, shape = (n_classes,)
The distinct labels used in classifying instances.

label_distributions_ : array, shape = [n_samples, n_classes]
9 changes: 9 additions & 0 deletions sklearn/svm/classes.py
@@ -111,6 +111,9 @@ class LinearSVC(BaseEstimator, LinearClassifierMixin,
intercept_ : array, shape = [1] if n_classes == 2 else [n_classes]
Constants in decision function.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> from sklearn.svm import LinearSVC
@@ -556,6 +559,9 @@ class SVC(BaseSVC):
fit_status_ : int
0 if correctly fitted, 1 otherwise (will raise warning)

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
probA_ : array, shape = [n_class * (n_class-1) / 2]
probB_ : array, shape = [n_class * (n_class-1) / 2]
If probability=True, the parameters learned in Platt scaling to
@@ -737,6 +743,9 @@ class NuSVC(BaseSVC):
intercept_ : array, shape = [n_class * (n_class-1) / 2]
Constants in decision function.

+classes_ : array, shape = (n_classes,)
+    Class labels.
+
Examples
--------
>>> import numpy as np
13 changes: 13 additions & 0 deletions sklearn/tests/test_docstring_parameters.py
@@ -17,6 +17,7 @@
from sklearn.utils.testing import check_docstring_parameters
from sklearn.utils.testing import _get_func_name
from sklearn.utils.testing import ignore_warnings
+from sklearn.utils.testing import all_estimators
from sklearn.utils.deprecation import _is_deprecated

import pytest
@@ -144,3 +145,15 @@ def test_tabs():
    assert '\t' not in source, ('"%s" has tabs, please remove them '
                                'or add it to the ignore list'
                                % modname)


+@pytest.mark.parametrize('name, Classifier',
+                         all_estimators(type_filter='classifier'))
+def test_classifier_docstring_attributes(name, Classifier):
+    pytest.importorskip('numpydoc')
+    from numpydoc import docscrape
+
+    doc = docscrape.ClassDoc(Classifier)
+    attributes = doc['Attributes']
+    assert attributes
+    assert any(['classes_' in att[0] for att in attributes])
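For reference on how the new test works: numpydoc's docscrape.ClassDoc parses a class's numpydoc-style docstring, and doc['Attributes'] yields one (name, type, description) entry per documented attribute, so att[0] is the attribute name. A minimal sketch, assuming numpydoc is installed (output indicative):

>>> from numpydoc import docscrape
>>> from sklearn.svm import LinearSVC
>>> sorted(att[0] for att in docscrape.ClassDoc(LinearSVC)['Attributes'])
['classes_', 'coef_', 'intercept_']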
5 changes: 5 additions & 0 deletions sklearn/tree/tree.py
@@ -1291,6 +1291,11 @@ class ExtraTreeClassifier(DecisionTreeClassifier):
Note that these weights will be multiplied with sample_weight (passed
through the fit method) if sample_weight is specified.

+Attributes
+----------
+classes_ : array, shape = (n_classes,)
+    Class labels.
+
See also
--------
ExtraTreeRegressor, sklearn.ensemble.ExtraTreesClassifier,