
Interpolation of ROC and PR curve metrics #18135


Closed
tadorfer opened this issue Aug 10, 2020 · 2 comments

Comments

@tadorfer

tadorfer commented Aug 10, 2020

Proposed feature

I have often had to use interpolation to plot the mean ROC or PR curve of several classifiers, since the number of thresholds can differ between classifiers and thus the lengths of their TPR, FPR, precision, and recall arrays vary. I thought it would be convenient to add a function argument "interp_dim" to the functions "roc_curve" and "precision_recall_curve" so the user can ensure that all classifiers' metric arrays have the same length, which would make it easier to plot the average.

Proposed solution

Perform NumPy interpolation on the TPR, FPR, precision, and recall arrays so that they all have the same length (as specified by the "interp_dim" function argument), which simplifies computing the mean.
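A minimal sketch of the idea, done outside scikit-learn with `np.interp`: each classifier's ROC curve is resampled onto a shared FPR grid of length `interp_dim` so the TPR arrays can be averaged elementwise. The helper name `interp_roc` and the grid choice are illustrative assumptions, not an existing scikit-learn API.

```python
import numpy as np

def interp_roc(fpr, tpr, interp_dim=100):
    """Resample one (fpr, tpr) curve onto a uniform FPR grid of interp_dim points.

    Hypothetical helper illustrating the proposed "interp_dim" behavior.
    np.interp expects increasing x-coordinates; fpr from roc_curve is
    already sorted in increasing order.
    """
    grid = np.linspace(0.0, 1.0, interp_dim)
    return grid, np.interp(grid, fpr, tpr)

# Two ROC curves with different numbers of thresholds (lengths 4 and 5)
fpr_a = np.array([0.0, 0.1, 0.4, 1.0])
tpr_a = np.array([0.0, 0.6, 0.8, 1.0])
fpr_b = np.array([0.0, 0.2, 0.5, 0.7, 1.0])
tpr_b = np.array([0.0, 0.5, 0.7, 0.9, 1.0])

grid, tpr_a_i = interp_roc(fpr_a, tpr_a)
_, tpr_b_i = interp_roc(fpr_b, tpr_b)

# After interpolation both arrays share the same length, so averaging is trivial
mean_tpr = (tpr_a_i + tpr_b_i) / 2
```

The same resampling would apply to precision/recall pairs from `precision_recall_curve`, with the caveat discussed elsewhere that linear interpolation of PR curves can be misleading.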

I opened this issue to see if this idea receives support from the community before making a pull request.

@rth
Member

rth commented Aug 11, 2020

Thanks @tadorfer. For the PR curves, there was an earlier discussion in #4577.

@thomasjpfan
Member

As part of scikit-learn's triaging guidelines, I am closing this issue because it is a duplicate of #4577.
