DOC Add quantile loss to user guide on HGBT regression #29063


Merged
merged 3 commits into from
Jul 23, 2024
23 changes: 15 additions & 8 deletions doc/modules/ensemble.rst
@@ -98,14 +98,21 @@ controls the number of iterations of the boosting process::
>>> clf.score(X_test, y_test)
0.8965

Available losses for regression are 'squared_error',
'absolute_error', which is less sensitive to outliers, and
'poisson', which is well suited to model counts and frequencies. For
classification, 'log_loss' is the only option. For binary classification it uses the
binary log loss, also known as binomial deviance or binary cross-entropy. For
`n_classes >= 3`, it uses the multi-class log loss function, with multinomial deviance
and categorical cross-entropy as alternative names. The appropriate loss version is
selected based on :term:`y` passed to :term:`fit`.
Available losses for **regression** are:

- 'squared_error', which is the default loss;
- 'absolute_error', which is less sensitive to outliers than the squared error;
- 'gamma', which is well suited to model strictly positive outcomes;
- 'poisson', which is well suited to model counts and frequencies;
- 'quantile', which allows for estimating a conditional quantile that can later
  be used to obtain prediction intervals (a minimal usage sketch follows this list).
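
For instance, a minimal sketch (an editorial illustration, not part of this
diff) of fitting two quantile models to obtain a rough 90% prediction
interval; the synthetic data and the quantile levels 0.05/0.95 are
assumptions made only for demonstration::

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import HistGradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    # Assumed synthetic regression data, for illustration only.
    X, y = make_regression(n_samples=1000, n_features=5, noise=10.0, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # One model per quantile level; the ``quantile`` parameter selects which
    # conditional quantile is estimated when ``loss='quantile'``.
    models = {
        q: HistGradientBoostingRegressor(
            loss="quantile", quantile=q, random_state=0
        ).fit(X_train, y_train)
        for q in (0.05, 0.95)
    }

    lower = models[0.05].predict(X_test)
    upper = models[0.95].predict(X_test)

    # Empirical fraction of test targets falling inside the [5%, 95%] interval.
    coverage = np.mean((y_test >= lower) & (y_test <= upper))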

For **classification**, 'log_loss' is the only option. For binary classification
it uses the binary log loss, also known as binomial deviance or binary
cross-entropy. For `n_classes >= 3`, it uses the multi-class log loss function,
with multinomial deviance and categorical cross-entropy as alternative names.
The appropriate loss version is selected based on :term:`y` passed to
:term:`fit`.
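
Similarly, a minimal sketch (illustration only, not part of this diff) showing
that the single ``loss='log_loss'`` option covers both cases, with the binary
or multi-class variant picked from the classes found in ``y``; the synthetic
three-class dataset is an assumption for demonstration::

    from sklearn.datasets import make_classification
    from sklearn.ensemble import HistGradientBoostingClassifier

    # ``y`` contains three classes, so the multi-class log loss (categorical
    # cross-entropy) variant is used internally.
    X, y = make_classification(
        n_samples=500, n_classes=3, n_informative=6, random_state=0
    )
    clf = HistGradientBoostingClassifier(loss="log_loss").fit(X, y)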

The size of the trees can be controlled through the ``max_leaf_nodes``,
``max_depth``, and ``min_samples_leaf`` parameters.