[numpy-1.19.3, py3.8.5, Linux Ubuntu] hstack -> Unable to allocate memory, 180 KiB #17684
Comments
Here's the code that emits that error: numpy/numpy/core/src/multiarray/ctors.c, lines 809 to 834 at 08f9eeb.
There are only two ways that
Does this error happen every time, or is it intermittent?
This is a bit scary. Did this issue appear with 1.19.3 and not with 1.19.2? Our cache could become corrupted, but it seems to me the main way that might happen is if we accidentally free an array without holding the GIL. It would be helpful if you could provide a more complete example. I am guessing here that the
I'd really like to know that as well, since 1.19.4 is in preparation. Is the issue repeatable?
Uff, there are very few changes aside from the OpenBLAS one, so it is a bit hard to imagine what might be going on, unless it has to do with the wheel rather than the changes? Or is it some existing race condition that just didn't show up before? Is there any chance of a full reproducing example? Running it in
This leaves me in a quandary :( I suppose the safe thing to do would be to put out a 1.19.4rc1.
Scary. We believe we are using newer docker images for the manylinux releases. |
@MarkBel Could you try installing 1.19.4 from the staging repo?
I'd like to get 1.19.4 tested before uploading to PyPI.
I will do my best tomorrow and get back to you with feedback.
Thanks! |
@charris Coming back with positive feedback: I have just upgraded numpy to 1.19.4 from the staging repo as you suggested, and no memory allocation issue has occurred. Here we go!
@MarkBel Thanks. |
I still see the issue by calling
logs:
Would anyone be able to create a minimal, shareable example? Otherwise it is very hard to dig in and see what is going on. There are few changes in 1.19.3, but it looks a bit like there might be a subtle but serious bug somewhere, and any starting point to find it would be very useful.
I tried reproducing the issue, but there is no problem with a single image file. Could the problem be that the program tries to allocate more memory than is available? In my case I have to calculate the cumsum of around 1400 images.
If each image is 23.7 MB, then you would need
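A rough back-of-the-envelope estimate for the cumsum case above (the 23.7 MB per-image figure comes from the comment; treating it as MiB and keeping all images resident is an assumption for illustration):

```python
# Rough memory estimate for holding ~1400 images in memory at once.
images = 1400
mib_per_image = 23.7  # per-image size quoted in the thread

total_mib = images * mib_per_image
print(f"~{total_mib / 1024:.1f} GiB")  # ~32.4 GiB if every image stays resident
```

That total is far beyond typical desktop RAM, which would make an out-of-memory failure in this scenario unsurprising, unlike the 180 KiB allocation in the original report.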
The error occurred when using hstack with an array of shape (700, 33) and dtype float64. The problem is not with the system: running `$ echo 1 > /proc/sys/vm/overcommit_memory` enables "always overcommit" mode, and indeed the system then allows the allocation no matter how large it is (within 64-bit memory addressing, at least). The operation required only 180 KiB of space, so exhausted memory cannot be the root cause of the problem.
Reproducing code example:
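The original snippet did not survive in this copy of the issue; a minimal sketch of the reported call, assuming two arrays with the (700, 33) float64 shape given in the description (the array contents and the second operand are assumptions):

```python
import numpy as np

# Two float64 arrays matching the shape reported in the issue.
a = np.ones((700, 33), dtype=np.float64)
b = np.ones((700, 33), dtype=np.float64)

# hstack concatenates along axis 1, giving shape (700, 66).
stacked = np.hstack([a, b])
print(stacked.shape)  # (700, 66)

# Each input occupies 700 * 33 * 8 bytes = 184800 bytes, about 180 KiB,
# which matches the "Unable to allocate 180 KiB" in the report.
print(a.nbytes / 1024)  # 180.46875
```

On a healthy system this tiny allocation succeeds trivially, which is why the reporter argues the failure cannot be genuine memory pressure.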
Error message:
NumPy/Python version information:
numpy-1.19.3