[MRG+1] Convert y in GradientBoosting to float64 instead of float32 #13524
Conversation
@@ -1432,7 +1432,7 @@ def fit(self, X, y, sample_weight=None, monitor=None):
         self._clear_state()

         # Check input
-        X, y = check_X_y(X, y, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
+        X = check_array(X, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
It's worth commenting why check_X_y is not used.
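For context, a minimal sketch of why the two arrays are validated separately (the helper name `_validate_inputs` and the `column_or_1d` call are illustrative assumptions, not necessarily the merged code):

```python
import numpy as np
from scipy import sparse
from sklearn.utils import check_array, column_or_1d

DTYPE = np.float32   # dtype the tree code expects for X
DOUBLE = np.float64  # dtype the tree code expects for y

def _validate_inputs(X, y):
    """check_X_y would cast X and y to the same dtype (float32 here),
    so X and y are checked independently and y is cast to float64 once."""
    X = check_array(X, accept_sparse=['csr', 'csc', 'coo'], dtype=DTYPE)
    y = column_or_1d(y, warn=True)
    return X, y.astype(DOUBLE, copy=False)

X = sparse.random(10, 3, density=0.5, format='csr')
y = np.arange(10)  # integer targets get converted up front
X_chk, y_chk = _validate_inputs(X, y)
print(X_chk.dtype, y_chk.dtype)  # float32 float64
```

The key point is that y gets its own dtype instead of inheriting float32 from the X check.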
@@ -44,7 +44,7 @@
 from time import time
 from ..model_selection import train_test_split
 from ..tree.tree import DecisionTreeRegressor
-from ..tree._tree import DTYPE
+from ..tree._tree import DTYPE, DOUBLE
It is interesting how this is our naming convention for the dtypes of X and y. In the future we may consider X_DTYPE and Y_DTYPE.
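For reference, DTYPE and DOUBLE are the NumPy scalar types the tree code uses for X and y respectively. A minimal check, assuming the constants are importable from the compiled sklearn.tree._tree module as in the diff above:

```python
import numpy as np
from sklearn.tree._tree import DTYPE, DOUBLE

# Expectation: the trees store X as float32 and y as float64.
print(DTYPE == np.float32, DOUBLE == np.float64)  # True True
```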
Yeah I know. You're the second +1 here now @thomasjpfan, wanna push the green button? ;)
Ta @adrinjalali
…32 (scikit-learn#13524)" This reverts commit 2502111.
Fixes #9098
If y is not float64, each decision tree in the ensemble would convert it to a double on its own; casting it once in fit avoids those repeated conversions.
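To illustrate the saving (a hedged demonstration rather than code from the PR): each base DecisionTreeRegressor needs y as float64, so keeping y in float32 at the ensemble level would mean one fresh float64 copy per tree, whereas a single up-front cast lets every stage reuse the same buffer.

```python
import numpy as np

rng = np.random.RandomState(0)
y32 = rng.rand(100_000).astype(np.float32)
n_estimators = 100

# Without the up-front cast: every tree makes its own float64 copy of y.
redundant_copies = sum(
    np.ascontiguousarray(y32, dtype=np.float64) is not y32
    for _ in range(n_estimators)
)
print(redundant_copies)  # 100

# With the cast done once in fit(): later "conversions" are no-ops.
y64 = y32.astype(np.float64)
print(np.ascontiguousarray(y64, dtype=np.float64) is y64)  # True
```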