I am currently working around this with shape=tuple(np.atleast_1d(shape).tolist()), which casts the shape entries to primitive Python ints. Maybe something like this should be done internally?
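To illustrate the workaround (a minimal sketch; the shape values here are made up for demonstration):

```python
import numpy as np

# A shape tuple containing NumPy scalars instead of plain ints,
# as commonly produced by arithmetic on another array's .shape
shape = (np.int64(3), np.int64(4))

# Round-tripping through .tolist() converts NumPy scalars to Python ints
safe_shape = tuple(np.atleast_1d(shape).tolist())

assert safe_shape == (3, 4)
assert all(type(s) is int for s in safe_shape)
```

Passing safe_shape to open_memmap then produces a header that np.load can parse.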
Describe the issue:
Passing a shape tuple to open_memmap that contains np.int64 instead of ints does not throw any errors and writes the array to disk without any issues, except that np.load fails to load it.

Specifically, the npy file will start with the following bytes (non-ascii chars removed):

as opposed to:

and it seems np.load fails on this as it does an ast.literal_eval on this header and thus cannot deserialize the np.int64()'s.

While the open_memmap docs correctly state that shape should be a tuple of ints, I think that either this should be enforced by raising an error if the type is wrong, or the values should be converted to plain ints, which would allow loading. This might be an open_memmap problem exclusively, but it might make sense to allow np.load to read headers with np.integer types. At the moment the write succeeds while creating an unusable npy file.

Reproduce the code example:
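A minimal reproduction along the lines described above (the original snippet was not captured here; the file path, dtype, and shape values are illustrative):

```python
import os
import tempfile

import numpy as np
from numpy.lib.format import open_memmap

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "bad.npy")

# Shape deliberately contains np.int64 scalars instead of plain ints
shape = (np.int64(3), np.int64(4))

# The write succeeds without any warning or error
mm = open_memmap(path, mode="w+", dtype=np.float64, shape=shape)
mm[:] = 0.0
mm.flush()
del mm

# On affected NumPy versions the header now contains np.int64(...) literals,
# which the safe header parser in np.load cannot evaluate
try:
    arr = np.load(path)
    print("loaded OK (not affected or already fixed):", arr.shape)
except ValueError as exc:
    print("np.load failed:", exc)
```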
Error message:
Python and NumPy Versions:
Tested with Python 3.9 and NumPy 2.0.2, as well as Python 3.12 and NumPy 2.2.3.
Runtime Environment:
[{'numpy_version': '2.0.2',
'python': '3.9.21 | packaged by conda-forge | (main, Dec 5 2024, '
'13:51:40) \n'
'[GCC 13.3.0]',
'uname': uname_result(system='Linux', node='fedora', release='6.12.11-200.fc41.x86_64', version='#1 SMP PREEMPT_DYNAMIC Fri Jan 24 04:59:58 UTC 2025', machine='x86_64')},
{'simd_extensions': {'baseline': ['SSE', 'SSE2', 'SSE3'],
'found': ['SSSE3',
'SSE41',
'POPCNT',
'SSE42',
'AVX',
'F16C',
'FMA3',
'AVX2',
'AVX512F',
'AVX512CD',
'AVX512_SKX',
'AVX512_CLX',
'AVX512_CNL',
'AVX512_ICL'],
'not_found': ['AVX512_KNL', 'AVX512_KNM']}},
{'architecture': 'Cooperlake',
'filepath': '/home/sjung/micromamba/envs/py39/lib/python3.9/site-packages/numpy.libs/libscipy_openblas64_-99b71e71.so',
'internal_api': 'openblas',
'num_threads': 24,
'prefix': 'libscipy_openblas',
'threading_layer': 'pthreads',
'user_api': 'blas',
'version': '0.3.27'}]
Context for the issue:
The silent failure creates unreadable npy files, which caused me data loss (or required manually rewriting the header).