
multiclass jaccard_similarity_score should not be equal to accuracy_score #7332


Closed
untom opened this issue Sep 2, 2016 · 18 comments · Fixed by #13151
Labels
Bug · Easy (Well-defined and straightforward way to resolve) · help wanted

Comments

@untom
Contributor

untom commented Sep 2, 2016

The documentation for sklearn.metrics.jaccard_similarity_score currently (version 0.17.1) states that:

In binary and multiclass classification, this function is equivalent to the accuracy_score. It differs in the multilabel classification problem.

However, I do not think that this is the right thing to do for multiclass problems. As far as I can tell, the more common usage of the Jaccard index for multiclass within the machine learning community is to use the mean Jaccard index calculated for each class individually, i.e., first calculate the Jaccard index for class 0, class 1 and class 2, and then average them. This is what is very commonly done in the image segmentation community, where it is referred to as the "mean Intersection over Union" score (see e.g. [1]), and as far as I can tell by skimming it, this is also what the original publication of the Jaccard index did in multiclass scenarios [2]. Note that this is NOT the same as the accuracy_score. Consider this example:

y_true = [0, 1, 2]
y_pred = [0, 0, 0]

The accuracy is clearly 1/3, and this is also what sklearn's jaccard_similarity_score currently returns. The class-specific Jaccard scores would be:

J0 = 1 / 3
J1 = 0 / 1
J2 = 0 / 1

Thus IMO the Jaccard score should be (J0 + J1 + J2) / 3 = 1/9 in this case.
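For concreteness, here is a minimal numpy sketch of the per-class computation described above; the helper `per_class_jaccard` is just for illustration, not an existing sklearn function:

```python
import numpy as np

def per_class_jaccard(y_true, y_pred, classes):
    """Jaccard index per class: TP / (TP + FP + FN).

    Assumes every class in `classes` appears in y_true or y_pred,
    otherwise the denominator would be zero.
    """
    scores = []
    for c in classes:
        tp = np.sum((y_true == c) & (y_pred == c))
        fp = np.sum((y_true != c) & (y_pred == c))
        fn = np.sum((y_true == c) & (y_pred != c))
        scores.append(tp / (tp + fp + fn))
    return np.array(scores)

y_true = np.array([0, 1, 2])
y_pred = np.array([0, 0, 0])

scores = per_class_jaccard(y_true, y_pred, classes=[0, 1, 2])
print(scores)         # [0.3333..., 0.0, 0.0]  -> J0, J1, J2
print(scores.mean())  # 0.1111... == 1/9, whereas the accuracy is 1/3
```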

[1] e.g. Long et al., "Fully Convolutional Networks for Semantic Segmentation", https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf , but see any other paper on semantic segmentation

[2] Jaccard, "The Distribution of the Flora in the Alpine Zone", http://onlinelibrary.wiley.com/doi/10.1111/j.1469-8137.1912.tb05611.x/abstract (Note that I have only skimmed the paper, but it seems to me that the author always reports the average of the "coefficient of community" calculated over pairs whenever more than just two groups are compared.)

@amueller
Member

@untom The Pascal VOC is multi-class multi-label, right? Pixels are not evaluated individually, but the whole image is. And then people usually look at per-class measures.

But you're right, it looks like the original definition is different from ours, see http://www.informatica.si/index.php/informatica/article/download/753/608
Where is it in the original paper?

@hccheng

hccheng commented Oct 27, 2016

According to the evaluation code (http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCdevkit_18-May-2011.tar), the multi-class VOC score is the average of per-class Jaccard scores.
The relevant part is from line 74 to line 90 in VOCevalseg.m

@shiba24

shiba24 commented Feb 21, 2017

@untom hi, I think you are right.

Currently jaccard_similarity_score just counts the samples in the intersection of pred and true, which does not match the definition of the Jaccard similarity coefficient. We need to calculate the union as well.

https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/metrics/classification.py#L174

I am not sure what it looks like for multilabel.
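To reproduce the reported behaviour, a quick check (assuming an older scikit-learn release, roughly 0.17–0.20, where jaccard_similarity_score still exists):

```python
from sklearn.metrics import accuracy_score, jaccard_similarity_score

y_true = [0, 1, 2]
y_pred = [0, 0, 0]

# On these older versions both calls print 0.333..., i.e. the multiclass
# "Jaccard" score is simply the accuracy -- no per-class union is computed.
print(accuracy_score(y_true, y_pred))
print(jaccard_similarity_score(y_true, y_pred))
```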

@jnothman jnothman added the Bug label Feb 21, 2017
@jnothman
Member

I agree that this seems to be strange even for the binary case. I would have thought Jaccard is an alternative to precision or recall or F1 (= Dice coefficient) in evaluating performance, in the binary case, on a single positive class, i.e. "true positives / (true positives + false positives + false negatives)". In particular, the binary implementation in our case does not seem to equate to the multilabel implementation run over a single class.

Regarding @untom's initial contention that the multiclass implementation is incorrect, I agree that the multiclass implementation is useless. I don't think that the macro averaging he suggests is the only way to go about it; as with P/R/F, micro-averaging excluding a majority negative class is still meaningful, and a weighted macro-average may also be feasible.

So yes, multiple strange things in our jaccard implementation IMO, and at a glance I don't see how the reference given in #1795 tells us about the multiclass case.

Labelling this a bug.
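As a rough sketch of the averaging options mentioned above, macro-, micro- and weighted-averaged Jaccard can be computed by hand from the confusion matrix on the example from this issue. This is my own illustration of the averaging schemes, not the library's implementation at the time:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([0, 1, 2])
y_pred = np.array([0, 0, 0])

# Per-class TP/FP/FN from the multiclass confusion matrix
# (rows = true classes, columns = predicted classes).
C = confusion_matrix(y_true, y_pred, labels=[0, 1, 2])
tp = np.diag(C)
fp = C.sum(axis=0) - tp
fn = C.sum(axis=1) - tp

per_class = tp / (tp + fp + fn)                      # [1/3, 0, 0]
macro = per_class.mean()                             # 1/9
micro = tp.sum() / (tp.sum() + fp.sum() + fn.sum())  # 1/5
support = C.sum(axis=1)                              # class frequencies in y_true
weighted = np.average(per_class, weights=support)    # 1/9 here (equal support)

print(per_class, macro, micro, weighted)
```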

@ghost

ghost commented Oct 20, 2017

Any update on this issue? I believe this issue should be assigned as a high priority bug. Incorrect evaluation metrics could mislead many people in interpreting and reporting their experimental results. I may consider submitting a patch if no one else is going to work on it.

@jnothman jnothman added the Easy (Well-defined and straightforward way to resolve) and help wanted labels Oct 21, 2017
@jnothman
Member

Yes, probably. PR welcome

@gxyd
Contributor

gxyd commented Oct 26, 2017

@jacobdang polite ping. Are you still working on this? It is completely fine if you are, but if not, this looks like a really good issue that I would like to work on.

@ghost

ghost commented Oct 26, 2017

@gxyd I am up against a deadline, so I may not be able to work on it immediately. If you could help, that would be highly appreciated. Thanks. 👍

@dimimal

dimimal commented Mar 6, 2018

Any progress on this issue yet?

@jnothman
Member

jnothman commented Mar 6, 2018

Waiting for a second review at #10083

@jnothman
Member

jnothman commented Mar 6, 2018

Unfortunately the code there is quite complicated, and could perhaps be simplified if something like #10628 existed, but I haven't had time to make that work, let alone bring it up to a mergeable standard.

@agamemnonc
Contributor

So, is the Jaccard similarity score a valid metric to use for multilabel classification problems?

@jnothman
Member

jnothman commented Jul 22, 2018 via email

@dimimal

dimimal commented Jul 26, 2018

@agamemnonc Only if you want the average score across the samples. It is totally different from intersection over union.
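For context, here is a minimal numpy sketch of the sample-averaged multilabel behaviour described here; the helper `samplewise_jaccard` is mine and is meant only to illustrate the thread's description, not to reproduce the exact library code:

```python
import numpy as np

def samplewise_jaccard(Y_true, Y_pred):
    """Per-sample Jaccard on multilabel indicator matrices, averaged over samples."""
    intersection = np.logical_and(Y_true, Y_pred).sum(axis=1)
    union = np.logical_or(Y_true, Y_pred).sum(axis=1)
    # Convention: a sample with empty true and predicted label sets scores 1.
    per_sample = np.where(union == 0, 1.0, intersection / np.maximum(union, 1))
    return per_sample.mean()

Y_true = np.array([[1, 1, 0],
                   [0, 1, 0]])
Y_pred = np.array([[1, 0, 0],
                   [0, 1, 1]])

print(samplewise_jaccard(Y_true, Y_pred))  # (1/2 + 1/2) / 2 = 0.5
```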

@jnothman
Member

jnothman commented Jul 26, 2018 via email

@TSchattschneider

TSchattschneider commented Feb 4, 2019

This problem in scikit-learn has recently caused a big headache for me in my research.
I work with segmentation and was surprised to see how scikit-learn interprets the Jaccard index metric.

@dimimal

dimimal commented Feb 4, 2019

@TSchattschneider I feel you. I remember how frustrated I was; I had the same problem. This bug should be flagged in the documentation until it is fixed.

@jnothman
Member

jnothman commented Feb 4, 2019 via email
