MNT Removed deprecated attributes and parameters -- ctnd #15804


Merged (27 commits, Dec 13, 2019)
Commits
5dd561a  removed warn_on_dtype (NicolasHug, Dec 5, 2019)
e05e17a  removed parameters to check_is_fitted (NicolasHug, Dec 5, 2019)
cdfac1e  all_estimators parameters (NicolasHug, Dec 5, 2019)
ef5d570  deprecated n_components attribute in AgglomerativeClustering (NicolasHug, Dec 5, 2019)
5485edb  Merge branch 'master' of github.com:scikit-learn/scikit-learn into de… (NicolasHug, Dec 5, 2019)
6671682  change default of base.score for multioutput (NicolasHug, Dec 5, 2019)
38b97cb  Merge branch 'multioutput_dep' into dep023 (NicolasHug, Dec 5, 2019)
b5fe811  removed lots of useless decorators? (NicolasHug, Dec 5, 2019)
5304343  changed default of copy in quantil_transform (NicolasHug, Dec 5, 2019)
226db87  removed six.py (NicolasHug, Dec 5, 2019)
53f9ecc  nmf default value of init param (NicolasHug, Dec 5, 2019)
d80940a  raise error instead of warning in LinearDiscriminantAnalysis (NicolasHug, Dec 5, 2019)
16b3c9c  removed label param in hamming_loss (NicolasHug, Dec 5, 2019)
7af6207  updated method parameter of power_transform (NicolasHug, Dec 5, 2019)
808ab05  pep8 (NicolasHug, Dec 5, 2019)
0d574a0  changed default value of min_impurity_split (NicolasHug, Dec 5, 2019)
5a4c2d5  removed assert_false and assert_true (NicolasHug, Dec 5, 2019)
887edd7  Merge branch 'master' of github.com:scikit-learn/scikit-learn into de… (NicolasHug, Dec 9, 2019)
04ec379  added and fixed versionchanged directives (NicolasHug, Dec 9, 2019)
015ad40  reset min_impurity_split default to None (NicolasHug, Dec 9, 2019)
e6443a5  fixed LDA issue (NicolasHug, Dec 9, 2019)
09bf4e5  fixed some test (NicolasHug, Dec 9, 2019)
1fae94f  more docstrings updates (NicolasHug, Dec 9, 2019)
43fea84  set min_impurity_decrease for test to pass (NicolasHug, Dec 9, 2019)
7cd20a0  upate docstring example (NicolasHug, Dec 9, 2019)
7fb0872  fixed doctest (NicolasHug, Dec 9, 2019)
dec2847  Merge branch 'master' of github.com:scikit-learn/scikit-learn into de… (NicolasHug, Dec 11, 2019)
2 changes: 1 addition & 1 deletion doc/modules/ensemble.rst
@@ -1323,7 +1323,7 @@ computationally expensive.
StackingRegressor(...)
>>> print('R2 score: {:.2f}'
... .format(multi_layer_regressor.score(X_test, y_test)))
- R2 score: 0.82
+ R2 score: 0.83

.. topic:: References

17 changes: 5 additions & 12 deletions sklearn/decomposition/_nmf.py
@@ -842,7 +842,7 @@ def _fit_multiplicative_update(X, W, H, beta_loss='frobenius',


def non_negative_factorization(X, W=None, H=None, n_components=None,
- init='warn', update_H=True, solver='cd',
+ init=None, update_H=True, solver='cd',
beta_loss='frobenius', tol=1e-4,
max_iter=200, alpha=0., l1_ratio=0.,
regularization=None, random_state=None,
@@ -891,10 +891,7 @@ def non_negative_factorization(X, W=None, H=None, n_components=None,

init : None | 'random' | 'nndsvd' | 'nndsvda' | 'nndsvdar' | 'custom'
Method used to initialize the procedure.
- Default: 'random'.
-
- The default value will change from 'random' to None in version 0.23
- to make it consistent with decomposition.NMF.
+ Default: None.

Valid options:

@@ -915,6 +912,9 @@

- 'custom': use custom matrices W and H

.. versionchanged:: 0.23
The default value of `init` changed from 'random' to None in 0.23.

update_H : boolean, default: True
Set to True, both W and H will be estimated from initial guesses.
Set to False, only W will be estimated.
@@ -1028,13 +1028,6 @@ def non_negative_factorization(X, W=None, H=None, n_components=None,
raise ValueError("Tolerance for stopping criteria must be "
"positive; got (tol=%r)" % tol)

if init == "warn":
if n_components < n_features:
warnings.warn("The default value of init will change from "
"random to None in 0.23 to make it consistent "
"with decomposition.NMF.", FutureWarning)
init = "random"

# check W and H, or initialize them
if init == 'custom' and update_H:
_check_init(H, (n_components, n_features), "NMF (input H)")
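A minimal sketch of the new calling convention (toy data; the values here are illustrative, not from the PR): with the deprecation path gone, `init=None` lets scikit-learn pick the initialization, while passing `init='random'` explicitly reproduces the old default.

    import numpy as np
    from sklearn.decomposition import non_negative_factorization

    # Small non-negative toy matrix; NMF requires X >= 0.
    X = np.abs(np.random.RandomState(0).randn(6, 5))

    # init=None (the new default) defers the choice to scikit-learn;
    # init='random' reproduces the pre-0.23 default explicitly.
    W, H, n_iter = non_negative_factorization(
        X, n_components=2, init='random', random_state=0)
    print(W.shape, H.shape)  # (6, 2) (2, 5)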
4 changes: 0 additions & 4 deletions sklearn/decomposition/tests/test_nmf.py
@@ -224,10 +224,6 @@ def test_non_negative_factorization_checking():
A = np.ones((2, 2))
# Test parameters checking is public function
nnmf = non_negative_factorization
msg = ("The default value of init will change from "
"random to None in 0.23 to make it consistent "
"with decomposition.NMF.")
assert_warns_message(FutureWarning, msg, nnmf, A, A, A, np.int64(1))
msg = ("Number of components must be a positive integer; "
"got (n_components=1.5)")
assert_raise_message(ValueError, msg, nnmf, A, A, A, 1.5, 'random')
19 changes: 4 additions & 15 deletions sklearn/discriminant_analysis.py
@@ -423,7 +423,6 @@ def fit(self, X, y):
y : array, shape (n_samples,)
Target values.
"""
# FIXME: Future warning to be removed in 0.23
X, y = check_X_y(X, y, ensure_min_samples=2, estimator=self,
dtype=[np.float64, np.float32])
self.classes_ = unique_labels(y)
@@ -455,21 +454,11 @@ def fit(self, X, y):
self._max_components = max_components
else:
if self.n_components > max_components:
-     warnings.warn(
+     raise ValueError(
          "n_components cannot be larger than min(n_features, "
-         "n_classes - 1). Using min(n_features, "
-         "n_classes - 1) = min(%d, %d - 1) = %d components."
-         % (X.shape[1], len(self.classes_), max_components),
-         ChangedBehaviorWarning)
-     future_msg = ("In version 0.23, setting n_components > min("
-                   "n_features, n_classes - 1) will raise a "
-                   "ValueError. You should set n_components to None"
-                   " (default), or a value smaller or equal to "
-                   "min(n_features, n_classes - 1).")
-     warnings.warn(future_msg, FutureWarning)
-     self._max_components = max_components
- else:
-     self._max_components = self.n_components
+         "n_classes - 1)."
+     )
+ self._max_components = self.n_components

if self.solver == 'svd':
if self.shrinkage is not None:
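A quick sketch of the new behavior (hypothetical toy data): with 2 classes and 4 features, at most min(4, 2 - 1) = 1 discriminant component is allowed, so asking for more now raises immediately instead of warning and clipping.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    X = np.random.RandomState(0).randn(10, 4)
    y = np.array([0, 1] * 5)  # 2 classes -> max components = min(4, 2 - 1) = 1

    try:
        # Pre-0.23 this clipped to 1 component with a warning; now it raises.
        LinearDiscriminantAnalysis(n_components=3).fit(X, y)
    except ValueError as exc:
        print(exc)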
20 changes: 10 additions & 10 deletions sklearn/ensemble/_forest.py
@@ -935,14 +935,14 @@ class RandomForestClassifier(ForestClassifier):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.


@@ -1253,14 +1253,14 @@ class RandomForestRegressor(ForestRegressor):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.

bootstrap : boolean, optional (default=True)
@@ -1530,14 +1530,14 @@ class ExtraTreesClassifier(ForestClassifier):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.

bootstrap : boolean, optional (default=False)
@@ -1840,14 +1840,14 @@ class ExtraTreesRegressor(ForestRegressor):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.

bootstrap : boolean, optional (default=False)
@@ -2078,14 +2078,14 @@ class RandomTreesEmbedding(BaseForest):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.

sparse_output : bool, optional (default=True)
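As a migration sketch (the same applies to the gradient boosting classes in the next file): code that relied on the old `min_impurity_split=1e-7` default can set `min_impurity_decrease` explicitly, as this PR's own tests do further down. Note the two thresholds are not equivalent: one gates a node's own impurity, the other the impurity reduction a split must achieve.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=200, random_state=0)

    # min_impurity_split is deprecated and slated for removal in 0.25;
    # min_impurity_decrease is the supported parameter going forward.
    clf = RandomForestClassifier(n_estimators=10,
                                 min_impurity_decrease=1e-7,
                                 random_state=0).fit(X, y)
    print(clf.score(X, y))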
8 changes: 4 additions & 4 deletions sklearn/ensemble/_gb.py
@@ -868,14 +868,14 @@ class GradientBoostingClassifier(ClassifierMixin, BaseGradientBoosting):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.

init : estimator or 'zero', optional (default=None)
@@ -1340,14 +1340,14 @@ class GradientBoostingRegressor(RegressorMixin, BaseGradientBoosting):

.. versionadded:: 0.19

- min_impurity_split : float, (default=1e-7)
+ min_impurity_split : float, (default=0)
Threshold for early stopping in tree growth. A node will split
if its impurity is above the threshold, otherwise it is a leaf.

.. deprecated:: 0.19
``min_impurity_split`` has been deprecated in favor of
``min_impurity_decrease`` in 0.19. The default value of
- ``min_impurity_split`` will change from 1e-7 to 0 in 0.23 and it
+ ``min_impurity_split`` has changed from 1e-7 to 0 in 0.23 and it
will be removed in 0.25. Use ``min_impurity_decrease`` instead.

init : estimator or 'zero', optional (default=None)
5 changes: 3 additions & 2 deletions sklearn/ensemble/tests/test_gradient_boosting.py
@@ -1170,9 +1170,10 @@ def test_non_uniform_weights_toy_edge_case_clf():

def check_sparse_input(EstimatorClass, X, X_sparse, y):
dense = EstimatorClass(n_estimators=10, random_state=0,
-                            max_depth=2).fit(X, y)
+                            max_depth=2, min_impurity_decrease=1e-7).fit(X, y)
sparse = EstimatorClass(n_estimators=10, random_state=0,
-                             max_depth=2).fit(X_sparse, y)
+                             max_depth=2,
+                             min_impurity_decrease=1e-7).fit(X_sparse, y)

assert_array_almost_equal(sparse.apply(X), dense.apply(X))
assert_array_almost_equal(sparse.predict(X), dense.predict(X))
19 changes: 1 addition & 18 deletions sklearn/metrics/_classification.py
@@ -1986,7 +1986,7 @@ class 2 1.00 0.67 0.80 3
return report


- def hamming_loss(y_true, y_pred, labels=None, sample_weight=None):
+ def hamming_loss(y_true, y_pred, sample_weight=None):
"""Compute the average Hamming loss.

The Hamming loss is the fraction of labels that are incorrectly predicted.
@@ -2001,17 +2001,6 @@ def hamming_loss(y_true, y_pred, labels=None, sample_weight=None):
y_pred : 1d array-like, or label indicator array / sparse matrix
Predicted labels, as returned by a classifier.

labels : array, shape = [n_labels], optional (default='deprecated')
Integer array of labels. If not provided, labels will be inferred
from y_true and y_pred.

.. versionadded:: 0.18
.. deprecated:: 0.21
This parameter ``labels`` is deprecated in version 0.21 and will
be removed in version 0.23. Hamming loss uses ``y_true.shape[1]``
for the number of labels when y_true is binary label indicators,
so it is unnecessary for the user to specify.

sample_weight : array-like of shape (n_samples,), default=None
Sample weights.

@@ -2071,12 +2060,6 @@ def hamming_loss(y_true, y_pred, labels=None, sample_weight=None):
y_type, y_true, y_pred = _check_targets(y_true, y_pred)
check_consistent_length(y_true, y_pred, sample_weight)

if labels is not None:
warnings.warn("The labels parameter is unused. It was"
" deprecated in version 0.21 and"
" will be removed in version 0.23",
FutureWarning)

if sample_weight is None:
weight_average = 1.
else:
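A minimal sketch of the simplified signature (toy multilabel data): the label count is inferred from `y_true.shape[1]`, so dropping `labels` loses nothing.

    import numpy as np
    from sklearn.metrics import hamming_loss

    y_true = np.array([[0, 1], [1, 1]])
    y_pred = np.zeros((2, 2))

    # 3 of the 4 label assignments are wrong -> 0.75.
    print(hamming_loss(y_true, y_pred))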
5 changes: 0 additions & 5 deletions sklearn/metrics/tests/test_classification.py
@@ -1176,11 +1176,6 @@ def test_multilabel_hamming_loss():
assert hamming_loss(y1, np.zeros_like(y1), sample_weight=w) == 2. / 3
# sp_hamming only works with 1-D arrays
assert hamming_loss(y1[0], y2[0]) == sp_hamming(y1[0], y2[0])
assert_warns_message(FutureWarning,
"The labels parameter is unused. It was"
" deprecated in version 0.21 and"
" will be removed in version 0.23",
hamming_loss, y1, y2, labels=[0, 1])


def test_jaccard_score_validation():
2 changes: 0 additions & 2 deletions sklearn/metrics/tests/test_common.py
@@ -351,8 +351,6 @@ def precision_recall_curve_padded_thresholds(*args, **kwargs):
"roc_curve",
"precision_recall_curve",

"hamming_loss",

"precision_score", "recall_score", "f1_score", "f2_score", "f0.5_score",
"jaccard_score",

21 changes: 7 additions & 14 deletions sklearn/preprocessing/_data.py
@@ -2606,8 +2606,8 @@ def quantile_transform(X, axis=0, n_quantiles=1000,
input is already a numpy array). If True, a copy of `X` is transformed,
leaving the original `X` unchanged

- ..versionchnanged:: 0.22
-     The default value of `copy` changed from False to True in 0.22.
+ .. versionchanged:: 0.23
+     The default value of `copy` changed from False to True in 0.23.

Returns
-------
@@ -3008,7 +3008,7 @@ def _more_tags(self):
return {'allow_nan': True}


- def power_transform(X, method='warn', standardize=True, copy=True):
+ def power_transform(X, method='yeo-johnson', standardize=True, copy=True):
"""
Power transforms are a family of parametric, monotonic transformations
that are applied to make data more Gaussian-like. This is useful for
@@ -3032,15 +3032,15 @@ def power_transform(X, method='warn', standardize=True, copy=True):
X : array-like, shape (n_samples, n_features)
The data to be transformed using a power transformation.

- method : str
+ method : {'yeo-johnson', 'box-cox'}, default='yeo-johnson'
The power transform method. Available methods are:

- 'yeo-johnson' [1]_, works with positive and negative values
- 'box-cox' [2]_, only works with strictly positive values

- The default method will be changed from 'box-cox' to 'yeo-johnson'
- in version 0.23. To suppress the FutureWarning, explicitly set the
- parameter.
+ .. versionchanged:: 0.23
+     The default value of the `method` parameter changed from
+     'box-cox' to 'yeo-johnson' in 0.23.

standardize : boolean, default=True
Set to True to apply zero-mean, unit-variance normalization to the
@@ -3092,12 +3092,5 @@ def power_transform(X, method='warn', standardize=True, copy=True):
.. [2] G.E.P. Box and D.R. Cox, "An Analysis of Transformations", Journal
of the Royal Statistical Society B, 26, 211-252 (1964).
"""
if method == 'warn':
warnings.warn("The default value of 'method' will change from "
"'box-cox' to 'yeo-johnson' in version 0.23. Set "
"the 'method' argument explicitly to silence this "
"warning in the meantime.",
FutureWarning)
method = 'box-cox'
pt = PowerTransformer(method=method, standardize=standardize, copy=copy)
return pt.fit_transform(X)
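A small sketch under the new defaults (toy strictly positive data, so both methods apply): `method` now defaults to 'yeo-johnson' with no FutureWarning, Box-Cox becomes opt-in, and `quantile_transform`, updated earlier in this file, now copies by default.

    import numpy as np
    from sklearn.preprocessing import power_transform, quantile_transform

    X = np.abs(np.random.RandomState(0).randn(20, 2)) + 1e-3  # strictly positive

    X_yj = power_transform(X)                    # 'yeo-johnson' is now the default
    X_bc = power_transform(X, method='box-cox')  # the old default, now opt-in

    # quantile_transform's copy parameter now defaults to True; X is untouched.
    X_q = quantile_transform(X, n_quantiles=10)
    print(X_yj.shape, X_bc.shape, X_q.shape)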
18 changes: 0 additions & 18 deletions sklearn/preprocessing/tests/test_data.py
@@ -2452,21 +2452,3 @@ def test_power_transformer_copy_False(method, standardize):

X_inv_trans = pt.inverse_transform(X_trans)
assert X_trans is X_inv_trans


def test_power_transform_default_method():
X = np.abs(X_2d)

future_warning_message = (
"The default value of 'method' "
"will change from 'box-cox'"
)
assert_warns_message(FutureWarning, future_warning_message,
power_transform, X)

with warnings.catch_warnings():
warnings.simplefilter('ignore')
X_trans_default = power_transform(X)

X_trans_boxcox = power_transform(X, method='box-cox')
assert_array_equal(X_trans_boxcox, X_trans_default)