[MRG+1] Fix scipy-dev-wheels build because numpy.core.umath_tests has been renamed #10880
Conversation
-    * (((n_classes - 1.) / n_classes) *
-       inner1d(y_coding, np.log(y_predict_proba))))
+    * ((n_classes - 1.) / n_classes)
+    * (y_coding * np.log(y_predict_proba)).sum(axis=1))
I found that `np.einsum('ij,ij->i', y_coding, np.log(y_predict_proba))` was another possibility, but I find it harder to grok (maybe because I don't use einsum very regularly). Note that (at least in the tests) both `y_coding` and `y_predict_proba` have shape `(n, p)`.
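Not from the PR itself — a minimal sketch with made-up shapes and data, checking that the spelling used in this PR and the einsum alternative agree (`inner1d` is omitted here since its import location depends on the numpy version):

```python
import numpy as np

rng = np.random.RandomState(0)
y_coding = rng.uniform(size=(5, 3))           # stands in for (n_samples, n_classes)
log_proba = np.log(rng.uniform(size=(5, 3)))  # stands in for np.log(y_predict_proba)

via_sum = (y_coding * log_proba).sum(axis=1)             # what this PR uses
via_einsum = np.einsum('ij,ij->i', y_coding, log_proba)  # the einsum alternative

# Both compute the row-wise inner product, shape (n_samples,)
np.testing.assert_allclose(via_sum, via_einsum)
```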
Can't this be done with `np.dot`?
I don't see how, but maybe I am missing a numpy trick. In the tests, `y_coding` and `y_predict_proba` are 2d of shape `(n_samples, n_classes)`, and you want an output of shape `(n_samples,)`.
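To illustrate the point with a made-up toy (not from the thread): a plain `np.dot` of `(n, p)` with `(p, n)` builds the full n × n matrix of pairwise inner products, of which only the diagonal is actually wanted:

```python
import numpy as np

a = np.arange(6.).reshape(3, 2)  # (n_samples, n_classes)
b = np.ones((3, 2))

full = np.dot(a, b.T)            # shape (3, 3): wasteful n x n matrix
rowwise = (a * b).sum(axis=1)    # shape (3,): what the code needs

# Only the diagonal of the dot product matches the row-wise inner products
np.testing.assert_allclose(np.diag(full), rowwise)
```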
Right
LGTM
I am going to merge this one; it should fix the scipy-dev-wheels build. As with any fix of the scipy-dev-wheels build, we need to think about the implications for the latest scikit-learn release. In this case, once numpy 1.15 is released, any import from `numpy.core.umath_tests` in the released scikit-learn will break. Quickly looking at the numpy releases since 1.12, a release seems to happen roughly every 6 months. This means numpy 1.15 is likely to be released before the next scikit-learn release. Possible options I can see, opinions more than welcome:
Looking at numpy/numpy#10790, it seems like it was removed mostly as a side effect of the ongoing work on migrating to pytest, done just 10 days ago. Concerns about breaking users' code were expressed in numpy/numpy#10790 (comment), so I think commenting there might be a good place to start. I'll leave it to you @lesteve as I'm not aware of all the details.
I guess it also depends on the expected timeline for 0.20 and whether there are any other critical issues that would be worth backporting.
For the record, I commented on the numpy issue; let's see what they say.
If you find the function particularly useful, you should propose adding it to NumPy. I think it was just for testing generalized ufuncs.
I think we were just using `inner1d` for convenience.
FYI I opened numpy/numpy#10845 about the possibility of reverting the renaming of the numpy C test modules.
Our scipy-dev-wheels build has been broken for 2 days, see for example this one.
`numpy.core.umath_tests` has been moved to private modules in numpy dev. See numpy/numpy#10815 where I added more details.
It seems like `inner1d` can be replaced using standard constructs. I ran the tests to make sure that the results from `inner1d` match the ones in this PR (they do match according to `assert_allclose`, not `assert_array_equal`).
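As an aside on why `assert_allclose` is the right check here: mathematically equal floating-point expressions can differ in their last bits when operations are reordered, so a bitwise `assert_array_equal` can fail where `assert_allclose` passes. A toy example (not the PR's data):

```python
import numpy as np
from numpy.testing import assert_allclose, assert_array_equal

x = np.array([0.1 + 0.2])  # 0.30000000000000004 in double precision
y = np.array([0.3])

assert_allclose(x, y)          # passes: equal within default tolerance
try:
    assert_array_equal(x, y)   # fails: the bit patterns differ
except AssertionError:
    print("bitwise comparison fails, as expected")
```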
Travis Cron job running on my fork, which is the only way to make sure that the Cron job will be fixed by this PR (the CI statuses in this PR only check that I did not break anything; they do not test that I fixed the scipy-dev-wheels build).