
Add scoring metric info in cross_val_score #13445


Merged
merged 1 commit into from Mar 14, 2019
8 changes: 7 additions & 1 deletion sklearn/model_selection/_validation.py
@@ -281,7 +281,13 @@ def cross_val_score(estimator, X, y=None, groups=None, scoring=None, cv='warn',
     scoring : string, callable or None, optional, default: None
         A string (see model evaluation documentation) or
         a scorer callable object / function with signature
-        ``scorer(estimator, X, y)``.
+        ``scorer(estimator, X, y)`` which should return only
+        a single value.
+
+        Similar to :func:`cross_validate`
+        but only a single metric is permitted.
+
         If None, the estimator's default scorer (if available) is used.
Member:

Just say "the estimator's score method"

Contributor Author:


@jnothman I thought this was more consistent with the cross_validate documentation here, but if you still want me to change it, I will.

Member:


I'm happy with it being consistent... I'd be happier with them all being changed :) Maybe in a subsequent PR?

Contributor Author:


@jnothman A separate PR is best, as it touches files other than _validation.py as well, such as scorer.py in sklearn/metrics/ and _search.py in sklearn/model_selection/. I can get started on that PR once you merge this one, since it may conflict with this file.


cv : int, cross-validation generator or an iterable, optional
Determines the cross-validation splitting strategy.
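The docstring change in this PR can be illustrated with a minimal sketch: `scoring` accepts either a string metric name or a callable with signature `scorer(estimator, X, y)` that returns a single value. The dataset, estimator, and the helper name `single_value_scorer` below are illustrative choices, not part of the PR:

```python
# Sketch of the documented scoring parameter of cross_val_score:
# a string metric name or a callable scorer(estimator, X, y)
# returning a single value per split.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=100, random_state=0)
clf = LogisticRegression(max_iter=1000)

# Option 1: a string metric name (see the model evaluation docs)
scores = cross_val_score(clf, X, y, scoring="accuracy", cv=5)

# Option 2: a callable returning a single value; the name
# ``single_value_scorer`` is hypothetical, for illustration only.
def single_value_scorer(estimator, X, y):
    return estimator.score(X, y)

scores_callable = cross_val_score(clf, X, y, scoring=single_value_scorer, cv=5)

print(scores.shape)  # one score per fold
```

Unlike `cross_validate`, which can evaluate several metrics at once and returns a dict, `cross_val_score` permits only a single metric and returns a plain array of per-fold scores.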