Description
Current code for precision_recall_curve assumes the curve always passes through (recall=0, precision=1). However, this is not always the case.
For instance:

from sklearn import metrics

pred_proba = [0.8, 0.8, 0.8, 0.2, 0.2]
true_value = [0, 0, 1, 1, 0]
precision, recall, thresholds = metrics.precision_recall_curve(true_value, pred_proba)

returns

precision = [0.4, 0.33333333, 1.]
recall = [1., 0.5, 0.]
The result is not correct and actually favors the poor model: a model that misclassifies its highest-scored points gains extra area under the curve from the artificial jump back to precision=1 at recall=0.
Be careful when computing AUC based on metrics.ranking.precision_recall_curve until this bug is fixed.
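
To get a feel for how much area the assumed (recall=0, precision=1) endpoint contributes, one can compare the PR AUC computed with and without the final entry of the returned arrays. The sketch below is only an illustration of the problem, not part of the library; it assumes the appended endpoint is the last element of each returned array, as in the output above, and the quoted values are approximate.

from sklearn import metrics

pred_proba = [0.8, 0.8, 0.8, 0.2, 0.2]
true_value = [0, 0, 1, 1, 0]
precision, recall, _ = metrics.precision_recall_curve(true_value, pred_proba)

# Full area, including the appended (recall=0, precision=1) endpoint.
full_area = metrics.auc(recall, precision)                # roughly 0.52

# Area with the appended endpoint dropped (the last entry of each array).
trimmed_area = metrics.auc(recall[:-1], precision[:-1])   # roughly 0.18

# Roughly 0.33 of the area comes from the assumed endpoint alone.
print(full_area - trimmed_area)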