DOC Add quantile loss to user guide on HGBT regression #29063
Conversation
Co-authored-by: Guillaume Lemaitre <[email protected]>
Some feedback. I noticed that GradientBoostingRegressor has the "huber" loss, which HistGradientBoostingRegressor does not provide, and conversely GradientBoostingRegressor lacks "poisson" and "gamma".
The latter is somewhat related to #16668, but I don't think we have an issue to add the huber loss to HistGradientBoostingRegressor (although I am not sure how useful it would be in practice, because it does not really have any meaningful probabilistic interpretation as far as I know).
Co-authored-by: Olivier Grisel <[email protected]>
Since the different comments have been addressed, I'm going to merge. Thanks @ArturoAmorQ
…29063) Co-authored-by: ArturoAmorQ <[email protected]> Co-authored-by: Guillaume Lemaitre <[email protected]> Co-authored-by: Olivier Grisel <[email protected]>
Reference Issues/PRs
Follows #28652.
What does this implement/fix? Explain your changes.
As mentioned in #28652 (comment), the description of HGBT regression losses was missing the quantile loss. This PR fixes the issue.
Any other comments?
In the same comment, it is mentioned that:
For a follow-up PR maybe we can consider refactoring the narrative to first introduce common mathematical aspects of GBT models and then the HGBT version. WDYT?