einsum broadcast regression (with optimize=True) #10343


Closed
d70-t opened this issue Jan 8, 2018 · 8 comments · Fixed by #10352
Comments

@d70-t

d70-t commented Jan 8, 2018

In numpy 1.13.3, it was possible to execute the following snippet without errors, while in 1.14.0 this happens:

In [1]: import numpy as np
In [2]: a = np.ones((10,2))
In [3]: b = np.ones((1,2))
In [4]: np.einsum('t...i,ti->t...', a, b)
Traceback (most recent call last):
  File "<ipython-input-4-fa62d1d882f9>", line 1, in <module>
    np.einsum('t...i,ti->t...', a, b)
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/einsumfunc.py", line 1087, in einsum
    einsum_call=True)
  File "/usr/local/lib/python2.7/dist-packages/numpy/core/einsumfunc.py", line 710, in einsum_path
    "not match previous terms.", char, tnum)
ValueError: ("Size of label '%s' for operand %d does not match previous terms.", 't', 1)

However, passing optimize=False solves the problem:

In [5]: np.einsum('t...i,ti->t...', a, b, optimize=False)
Out[5]: array([2., 2., 2., 2., 2., 2., 2., 2., 2., 2.])

Is this intended behavior, meaning the user is now responsible for explicitly disabling optimization, or is this a bug?

@seberg seberg added this to the 1.14.1 release milestone Jan 8, 2018
@charris
Member

charris commented Jan 8, 2018

Both? optimize=True was made the default, but it shouldn't be causing errors.

@dgasmith Another one ;)

@eric-wieser
Member

Seems we also failed to format the error message correctly.

@mhvk
Contributor

mhvk commented Jan 8, 2018

This hit at least one user of astropy (though not our tests, annoyingly!). The following breaks (but works with optimize=False):

np.einsum('...ij,j...->i...', np.eye(3)[np.newaxis], np.array([[1., 1.], [2., 2.], [3., 3.]]))
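
For reference (not part of the original comment), a minimal sketch of the same contraction with optimize=False, using the operands above; the shape annotation is mine:

import numpy as np

mats = np.eye(3)[np.newaxis]                      # shape (1, 3, 3)
vecs = np.array([[1., 1.], [2., 2.], [3., 3.]])   # shape (3, 2)

# With optimize=False the broadcast over the singleton leading axis of
# mats works as in numpy 1.13.x: the ellipsis axes (1,) and (2,)
# broadcast to (2,), giving an output of shape (3, 2).
out = np.einsum('...ij,j...->i...', mats, vecs, optimize=False)
print(out.shape)   # (3, 2)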

@mhvk
Contributor

mhvk commented Jan 10, 2018

The fix in #10352 seems correct. Note also that the current optimization choice makes no sense: for 2 arguments, optimization should not be attempted; see #10357.
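
For context (not from the comment): with only two operands there is exactly one contraction to perform, so the path search cannot improve anything. A small sketch using np.einsum_path, which reports the path the optimizer would pick; shapes are arbitrary:

import numpy as np

a = np.random.rand(50, 60)
b = np.random.rand(60, 70)

# With two operands the only possible path is to contract them directly,
# so the optimizer's answer is always the trivial path [(0, 1)].
path, info = np.einsum_path('ij,jk->ik', a, b, optimize='greedy')
print(path)   # ['einsum_path', (0, 1)]
print(info)   # human-readable breakdown of the chosen contraction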

@mhvk
Contributor

mhvk commented Jan 10, 2018

A very simple fix for the slowdown is in #10359 - I think we should consider reverting the change in default of optimize, though. In the end, einsum is very likely used mostly with small numbers of arrays, for which the additional cost is not worth it. It seems more likely that users with long expressions will read the docstring and realize there is an optimize keyword than that users with short ones will realize the default is bad for their case.
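
A rough sketch (not from the thread) of the overhead being weighed here, timing a tiny two-operand contraction with and without path optimization; the shapes and repeat count are arbitrary:

import timeit
import numpy as np

a = np.random.rand(3, 3)
b = np.random.rand(3)

# For tiny operands the path search can dominate the actual contraction,
# which is the per-call cost being discussed above.
t_opt   = timeit.timeit(lambda: np.einsum('ij,j->i', a, b, optimize=True),  number=10000)
t_plain = timeit.timeit(lambda: np.einsum('ij,j->i', a, b, optimize=False), number=10000)
print(t_opt, t_plain)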

joostvanzwieten added a commit to joostvanzwieten/nutils that referenced this issue Jan 12, 2018
This circumvents a bug in numpy 1.14.0: `einsum` with the now default
`optimize=True` does not support singleton expansion (see [numpy#10343]).

[numpy#10343]: numpy/numpy#10343
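
A sketch (not from the nutils commits themselves) of the kind of workaround available on numpy 1.14.0, using the shapes from the original report: either opt out of the optimizer or broadcast the singleton axis by hand.

import numpy as np

a = np.ones((10, 2))
b = np.ones((1, 2))

# Workaround 1: disable the path optimizer for this call.
out1 = np.einsum('t...i,ti->t...', a, b, optimize=False)

# Workaround 2: expand the singleton axis explicitly so the 't' label has
# the same size in both operands before einsum sees them.
out2 = np.einsum('t...i,ti->t...', a, np.broadcast_to(b, a.shape))

assert np.allclose(out1, out2)   # both give an array of 2.0 with shape (10,)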
@mhvk
Contributor

mhvk commented Jan 14, 2018

@charris - this issue is biting lots of astropy users now (astropy/astropy#7051 (comment)) as it breaks one of our coordinate transformations. Those are all with 3x3 matrices, so pending not just a solution but also a better understanding of the performance impact, I would indeed suggest reverting the default for 1.14.1. Obviously, we also need #10352 and especially its tests so this doesn't recur! (We missed it in astropy in part because for python3 our transformations use matmul -- guess we should test our last python2 version also with numpy-dev...)
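
For illustration (not part of the comment): the matmul-style formulation alluded to above, applying a stack of 3x3 matrices to a batch of vectors; the names and shapes here are made up, not astropy's actual internals.

import numpy as np

mats = np.tile(np.eye(3), (5, 1, 1))   # stack of 3x3 matrices, shape (5, 3, 3)
vecs = np.random.rand(5, 3)            # batch of 3-vectors, shape (5, 3)

# einsum formulation (the style affected by the optimize=True regression)
out_einsum = np.einsum('...ij,...j->...i', mats, vecs, optimize=False)

# matmul formulation, which sidesteps einsum entirely
out_matmul = np.matmul(mats, vecs[..., np.newaxis])[..., 0]

assert np.allclose(out_einsum, out_matmul)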

charris added a commit to charris/numpy that referenced this issue Jan 14, 2018
There have been bugs reported with einsum with the `optimize=True`
default, see numpy#10343. This PR makes the default `False` for the 1.14.1
release. Because optimizing the contractions is not a trivial problem,
this will allow us to pursue fixes in the longer time frame of the 1.15
release.
@charris
Member

charris commented Jan 14, 2018

@mhvk See #10403.

@mhvk
Contributor

mhvk commented Jan 14, 2018

OK, good, that gives more time.

mattjj added a commit to HIPS/autograd that referenced this issue Feb 1, 2018
VikingScientist pushed a commit to VikingScientist/nutils that referenced this issue Mar 3, 2018
NileGraddis pushed a commit to AllenInstitute/datacube that referenced this issue May 11, 2018