DOC add interval range for parameter of SGDRegressor #28373

Merged 17 commits on Feb 9, 2024

25 changes: 19 additions & 6 deletions sklearn/linear_model/_stochastic_gradient.py
@@ -1061,10 +1061,10 @@ class SGDClassifier(BaseSGDClassifier):
The initial learning rate for the 'constant', 'invscaling' or
'adaptive' schedules. The default value is 0.0 as eta0 is not used by
the default schedule 'optimal'.
- Values must be in the range `(0.0, inf)`.
+ Values must be in the range `[0.0, inf)`.

power_t : float, default=0.5
- The exponent for inverse scaling learning rate [default 0.5].
+ The exponent for inverse scaling learning rate.
Values must be in the range `(-inf, inf)`.

early_stopping : bool, default=False
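Not part of the diff: a minimal sketch of how `eta0` interacts with the `learning_rate` schedules described above. The toy data from `make_classification` and the chosen values are illustrative only.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=200, random_state=0)

# Default schedule is 'optimal': eta0 can stay at 0.0 because it is not used.
clf_optimal = SGDClassifier(random_state=0).fit(X, y)

# With 'adaptive' (or 'constant'/'invscaling'), eta0 is the starting step size
# and should be a positive float.
clf_adaptive = SGDClassifier(
    learning_rate="adaptive", eta0=0.1, random_state=0
).fit(X, y)

print(clf_optimal.n_iter_, clf_adaptive.n_iter_)
```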
@@ -1789,14 +1789,15 @@ class SGDRegressor(BaseSGDRegressor):

alpha : float, default=0.0001
Constant that multiplies the regularization term. The higher the
- value, the stronger the regularization.
- Also used to compute the learning rate when set to `learning_rate` is
- set to 'optimal'.
+ value, the stronger the regularization. Also used to compute the
+ learning rate when `learning_rate` is set to 'optimal'.
+ Values must be in the range `[0.0, inf)`.

l1_ratio : float, default=0.15
The Elastic Net mixing parameter, with 0 <= l1_ratio <= 1.
l1_ratio=0 corresponds to L2 penalty, l1_ratio=1 to L1.
Only used if `penalty` is 'elasticnet'.
+ Values must be in the range `[0.0, 1.0]`.

fit_intercept : bool, default=True
Whether the intercept should be estimated or not. If False, the
@@ -1806,6 +1807,7 @@ class SGDRegressor(BaseSGDRegressor):
The maximum number of passes over the training data (aka epochs).
It only impacts the behavior in the ``fit`` method, and not the
:meth:`partial_fit` method.
+ Values must be in the range `[1, inf)`.

.. versionadded:: 0.19
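For context (not from the PR), a short sketch that exercises the `alpha`, `l1_ratio` and `max_iter` ranges documented above; the synthetic data is only for illustration.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

reg = SGDRegressor(
    penalty="elasticnet",
    alpha=1e-4,     # in [0.0, inf); also feeds the 'optimal' learning-rate schedule
    l1_ratio=0.15,  # in [0.0, 1.0]; 0 is pure L2, 1 is pure L1
    max_iter=1000,  # in [1, inf); epochs used by fit, not by partial_fit
    random_state=0,
).fit(X, y)
print(reg.coef_[:3])
```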

@@ -1815,6 +1817,7 @@ class SGDRegressor(BaseSGDRegressor):
epochs.
Convergence is checked against the training loss or the
validation loss depending on the `early_stopping` parameter.
+ Values must be in the range `[0.0, inf)`.

.. versionadded:: 0.19
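A rough illustration (not part of the change) of how `tol` bounds the number of epochs `fit` actually runs; the exact epoch counts depend on the data and are not guaranteed.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor

X, y = make_regression(n_samples=500, n_features=5, noise=1.0, random_state=0)

loose = SGDRegressor(max_iter=1000, tol=1e-1, random_state=0).fit(X, y)
tight = SGDRegressor(max_iter=1000, tol=1e-6, random_state=0).fit(X, y)

# A looser tolerance usually triggers the stopping criterion after fewer epochs.
print(loose.n_iter_, tight.n_iter_)
```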

@@ -1823,6 +1826,7 @@ class SGDRegressor(BaseSGDRegressor):

verbose : int, default=0
The verbosity level.
+ Values must be in the range `[0, inf)`.

epsilon : float, default=0.1
Epsilon in the epsilon-insensitive loss functions; only if `loss` is
@@ -1831,6 +1835,7 @@ class SGDRegressor(BaseSGDRegressor):
important to get the prediction exactly right.
For epsilon-insensitive, any differences between the current prediction
and the correct label are ignored if they are less than this threshold.
+ Values must be in the range `[0.0, inf)`.

random_state : int, RandomState instance, default=None
Used for shuffling the data, when ``shuffle`` is set to ``True``.
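Illustrative only (not from the diff): `epsilon` matters just for the epsilon-insensitive losses, as noted above, and `random_state` fixes the shuffling; data and values are arbitrary.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=0)

reg = SGDRegressor(
    loss="epsilon_insensitive",
    epsilon=0.5,      # in [0.0, inf); residuals below this are ignored by the loss
    random_state=42,  # fixes the shuffling order when shuffle=True
).fit(X, y)
print(reg.score(X, y))
```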
@@ -1855,9 +1860,11 @@ class SGDRegressor(BaseSGDRegressor):
eta0 : float, default=0.01
The initial learning rate for the 'constant', 'invscaling' or
'adaptive' schedules. The default value is 0.01.
+ Values must be in the range `[0.0, inf)`.

power_t : float, default=0.25
The exponent for inverse scaling learning rate.
+ Values must be in the range `(-inf, inf)`.

early_stopping : bool, default=False
Whether to use early stopping to terminate training when validation
@@ -1874,6 +1881,7 @@ class SGDRegressor(BaseSGDRegressor):
The proportion of training data to set aside as validation set for
early stopping. Must be between 0 and 1.
Only used if `early_stopping` is True.
+ Values must be in the range `(0.0, 1.0)`.

.. versionadded:: 0.20
Added 'validation_fraction' option
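Not from the PR: a minimal early-stopping sketch in which `validation_fraction` controls the held-out split used to monitor the validation score.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor

X, y = make_regression(n_samples=1000, n_features=20, noise=5.0, random_state=0)

reg = SGDRegressor(
    early_stopping=True,
    validation_fraction=0.2,  # in (0.0, 1.0); 20% of the data is held out
    max_iter=1000,
    random_state=0,
).fit(X, y)
print(reg.n_iter_)  # typically far fewer epochs than max_iter
```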
@@ -1883,6 +1891,7 @@ class SGDRegressor(BaseSGDRegressor):
fitting.
Convergence is checked against the training loss or the
validation loss depending on the `early_stopping` parameter.
+ Integer values must be in the range `[1, max_iter)`.

.. versionadded:: 0.20
Added 'n_iter_no_change' option
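Another illustration with assumed values (not from the diff): `n_iter_no_change` is the patience of the stopping rule, checked against the training loss, or the validation loss when `early_stopping=True`.

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import SGDRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=2.0, random_state=0)

eager = SGDRegressor(n_iter_no_change=2, max_iter=1000, random_state=0).fit(X, y)
patient = SGDRegressor(n_iter_no_change=20, max_iter=1000, random_state=0).fit(X, y)

# The more patient configuration usually runs at least as many epochs.
print(eager.n_iter_, patient.n_iter_)
```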
@@ -2058,10 +2067,12 @@ class SGDOneClassSVM(BaseSGD, OutlierMixin):
The maximum number of passes over the training data (aka epochs).
It only impacts the behavior in the ``fit`` method, and not the
`partial_fit`. Defaults to 1000.
+ Values must be in the range `[1, inf)`.

tol : float or None, default=1e-3
The stopping criterion. If it is not None, the iterations will stop
when (loss > previous_loss - tol). Defaults to 1e-3.
+ Values must be in the range `[0.0, inf)`.

shuffle : bool, default=True
Whether or not the training data should be shuffled after each epoch.
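For context (not part of the diff), a small `SGDOneClassSVM` sketch using the `max_iter` and `tol` settings documented above; `predict` returns +1 for inliers and -1 for outliers. The generated points are illustrative only.

```python
import numpy as np
from sklearn.linear_model import SGDOneClassSVM

rng = np.random.RandomState(0)
X_train = rng.normal(size=(500, 2))                    # mostly "normal" points
X_test = np.vstack([rng.normal(size=(10, 2)),          # inlier-like samples
                    rng.uniform(4, 6, size=(10, 2))])  # obvious outliers

oc = SGDOneClassSVM(nu=0.1, max_iter=1000, tol=1e-3, random_state=0).fit(X_train)
print(oc.predict(X_test))
```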
@@ -2094,9 +2105,11 @@ class SGDOneClassSVM(BaseSGD, OutlierMixin):
The initial learning rate for the 'constant', 'invscaling' or
'adaptive' schedules. The default value is 0.0 as eta0 is not used by
the default schedule 'optimal'.
+ Values must be in the range `[0.0, inf)`.

power_t : float, default=0.5
- The exponent for inverse scaling learning rate [default 0.5].
+ The exponent for inverse scaling learning rate.
+ Values must be in the range `(-inf, inf)`.

warm_start : bool, default=False
When set to True, reuse the solution of the previous call to fit as