segfault in any() on large object array (Trac #1522) #2119
@pv wrote on 2010-06-27 Can you attach a minimal test case that exhibits this crash? Your array is actually an object array (the gdb backtrace shows this). How do you actually load it from the text file? What platform (64-bit linux), etc?
@pv wrote on 2010-06-27 Ok, thanks, I managed to reproduce it now.
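A minimal sketch of such a reproduction (an assumption based on the details reported later in this thread, not the original snippet): calling any() on a sufficiently large object array was enough to trigger the crash on affected versions.

```python
import numpy as np

# Hedged sketch, not the original reproduction: an object array roughly
# the size of the reporter's (42364 elements).  On affected NumPy versions
# (1.3.0, 1.4.1, and the trunk of that era) this call segfaulted; on fixed
# versions it simply returns a truthy result.
a = np.ones(42364, dtype=object)
a.any()
```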
trac user glub wrote on 2010-06-27 Sorry for leaving out the example. This happened on 64-bit Fedora 13, Python 2.6.4.
trac user jpeel wrote on 2010-12-29 The limit seems to be 9999 objects; 10000 segfaults.
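A small illustration of that boundary (hedged; the exact threshold is taken from the comment above and the buffering explanation below):

```python
import numpy as np

# Assumed illustration of the reported boundary on affected versions:
# 9999 elements stayed on the unbuffered reduce path and worked, while
# 10000 elements triggered the buffered path and crashed.
np.ones(9999, dtype=object).any()    # worked
np.ones(10000, dtype=object).any()   # segfaulted before the fix
```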
trac user jpeel wrote on 2010-12-29 Well, I'm flummoxed, but the only significance of the number 10000 is that it is the point at which the BUFFER_UFUNCLOOP method is used. The problem, I believe, has something to do with the casting of the object to a boolean such that the result turns out to be an invalid object. Then, since the cast object is invalid, there is a segfault when Py_XINCREF is used on it.
Attachment added by trac user jpeel on 2011-01-04: 0001-BF-added-fix-for-segfault-when-using-any-on-large-ar.patch
trac user jpeel wrote on 2011-01-04 I've found the problem. The beginning of the process when the buffer is used (number of objects >= 10000) is as follows. The first object of the array is copied and then cast to a Bool. However, if either the input array contains objects or the output will be of type object, then loop->obj is set to 1 in construct_reduce(), which signals that the cast object (in this case a Bool) should be INCREFed. Since the object is a Bool rather than a PyObject, a segfault occurs. The same problem doesn't happen with smaller arrays, because in that case the first object is copied without being cast and then INCREFed; since it is a PyObject, there isn't a problem.

The patch I submitted simply removes the lines that INCREF when loop->obj is set and the array is large enough to trigger buffering. The only potential problem with doing this is if a ufunc generates an object as output, but I don't really see that as a possibility. Does anyone have a problem with this fix? The alternative is to add separate loop->obj flags: one for when the input is an object and one for when the output is an object.
Pretty sure this is fixed. This bug was reported before the ufunc machinery was reworked to use the nditer. I am unable to reproduce with WinXP/numpy 1.6.2 or current master on linux64. |
I don't see a segfault, but this is fishy.
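For illustration (a hedged sketch of the behavior being described, not the exact session), any() on an object array returned the element itself rather than a boolean:

```python
import numpy as np

# Assumed illustration: on the NumPy versions discussed in this thread,
# any() on an object array returned a plain Python object (here the int 1)
# instead of a boolean value.  Current versions may behave differently.
a = np.ones(10000, dtype=object)
print(a.any(), type(a.any()))   # e.g. "1 <type 'int'>" rather than True
```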
Should probably return a boolean. Same with
I'll open another issue for that.
Original ticket http://projects.scipy.org/numpy/ticket/1522 on 2010-06-27 by trac user glub, assigned to unknown.
I am consistently getting segfaults when I call any() on a particular array of 42364 elements. I will try to attach the array as a text file. There doesn't appear to be anything interesting about the array except that it does not have any zeros and appears to have a lot of smallish, duplicate values. I got the same segfault in 1.3.0, 1.4.1, and r8464 from svn:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff03eac2f in PyUFunc_Reduce (self=, args=, kwds=, operation=)
at numpy/core/src/umath/ufunc_object.c:2785
2785 Py_XINCREF(((PyObject *)loop->castbuf));
(gdb) bt
#0 0x00007ffff03eac2f in PyUFunc_Reduce (self=, args=, kwds=, operation=)
#1 PyUFunc_GenericReduction (self=, args=, kwds=, operation=)
#2 0x00000035d1043db3 in PyObject_Call () from /usr/lib64/libpython2.6.so.1.0
#3 0x00007ffff065ecbe in PyArray_GenericReduceFunction (m1=, op=, axis=, rtype=, out=0x0)
#4 0x00007ffff0682aab in PyArray_Any (self=, axis=0, out=0x0) at numpy/core/src/multiarray/calculation.c:697
#5 0x00007ffff0682b5e in array_any (self=0xd6ae90, args=, kwds=) at numpy/core/src/multiarray/methods.c:1825
#6 0x00000035d10ddae6 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0
#7 0x00000035d10de312 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0
#8 0x00000035d10df4e9 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.6.so.1.0
#9 0x00000035d10dd897 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0
#10 0x00000035d10de312 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0
#11 0x00000035d10de312 in PyEval_EvalFrameEx () from /usr/lib64/libpython2.6.so.1.0
#12 0x00000035d10df4e9 in PyEval_EvalCodeEx () from /usr/lib64/libpython2.6.so.1.0
#13 0x00000035d10df5b2 in PyEval_EvalCode () from /usr/lib64/libpython2.6.so.1.0
#14 0x00000035d10fa52c in ?? () from /usr/lib64/libpython2.6.so.1.0
#15 0x00000035d10fa600 in PyRun_FileExFlags () from /usr/lib64/libpython2.6.so.1.0
#16 0x00000035d10fb9dc in PyRun_SimpleFileExFlags () from /usr/lib64/libpython2.6.so.1.0
#17 0x00000035d110807d in Py_Main () from /usr/lib64/libpython2.6.so.1.0
#18 0x00000034b341ec5d in __libc_start_main () from /lib64/libc.so.6
#19 0x0000000000400649 in _start ()