Commit c0ed2af

Author: Fabian Pedregosa (committed)
DOC: restructure docstring of ElasticNet.
I found it confusing that remarks about the rho parameter were scattered between the heading and the notes, so I moved everything related to the objective function into the main part. No new content was added.
1 parent 9d638ec commit c0ed2af

1 file changed: +18 −19 lines


sklearn/linear_model/coordinate_descent.py

Lines changed: 18 additions & 19 deletions
@@ -20,8 +20,24 @@
 class ElasticNet(LinearModel):
     """Linear Model trained with L1 and L2 prior as regularizer
 
-    rho = 1 is the lasso penalty. Currently, rho <= 0.01 is not
-    reliable, unless you supply your own sequence of alpha.
+    Minimizes the objective function::
+
+        1 / (2 * n_samples) * ||y - Xw||^2_2 +
+        + alpha * rho * ||w||_1 + 0.5 * alpha * (1 - rho) * ||w||^2_2
+
+    If you are interested in controlling the L1 and L2 penalty
+    separately, keep in mind that this is equivalent to::
+
+        a * L1 + b * L2
+
+    where::
+
+        alpha = a + b and rho = a / (a + b)
+
+    The parameter rho corresponds to alpha in the glmnet R package while
+    alpha corresponds to the lambda parameter in glmnet. Specifically, rho =
+    1 is the lasso penalty. Currently, rho <= 0.01 is not reliable, unless
+    you supply your own sequence of alpha.
 
     Parameters
     ----------
@@ -63,23 +79,6 @@ class ElasticNet(LinearModel):
     -----
     To avoid unnecessary memory duplication the X argument of the fit method
     should be directly passed as a fortran contiguous numpy array.
-
-    The parameter rho corresponds to alpha in the glmnet R package
-    while alpha corresponds to the lambda parameter in glmnet.
-    More specifically, the objective function is::
-
-        1 / (2 * n_samples) * ||y - Xw||^2_2 +
-        + alpha * rho * ||w||_1 + 0.5 * alpha * (1 - rho) * ||w||^2_2
-
-    If you are interested in controlling the L1 and L2 penalty
-    separately, keep in mind that this is equivalent to::
-
-        a * L1 + b * L2
-
-    for::
-
-        alpha = a + b and rho = a / (a + b)
-
     """
     def __init__(self, alpha=1.0, rho=0.5, fit_intercept=True,
                  normalize=False, precompute='auto', max_iter=1000,
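
The relationship alpha = a + b, rho = a / (a + b) described in the restructured docstring can be applied directly when fitting the estimator. The following is a minimal sketch (not part of the commit), using the parameter names of this scikit-learn version (rho was renamed l1_ratio in later releases); the penalty weights and data below are made up for illustration.

    # Sketch: choosing ElasticNet's alpha and rho from separate L1/L2
    # penalty weights a and b, per the docstring: alpha = a + b and
    # rho = a / (a + b).  Synthetic data; `rho` is `l1_ratio` in newer releases.
    import numpy as np
    from sklearn.linear_model import ElasticNet

    a, b = 0.7, 0.3              # desired weights on the L1 and L2 terms
    alpha = a + b                # overall regularization strength
    rho = a / (a + b)            # mixing parameter; rho = 1 is the lasso penalty

    rng = np.random.RandomState(0)
    X = np.asfortranarray(rng.randn(50, 10))  # Fortran-contiguous, as the Notes advise
    y = rng.randn(50)

    model = ElasticNet(alpha=alpha, rho=rho).fit(X, y)
    print(model.coef_)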
