
TYP: Inconsistent behavior when assigning arrays of a scalar type to arrays using the scalar's alias #29151


Closed
MilanStaffehl opened this issue Jun 9, 2025 · 5 comments · Fixed by #29155

Comments

@MilanStaffehl

Describe the issue:

There is an inconsistency in numpy 2.3 when assigning an array annotated with a scalar type such as np.double to a variable annotated with the matching alias, such as np.float64, as its dtype: for every scalar type except np.float64 and np.complex128, the assignment passes type checking with mypy, but for those two it fails. (The example below is limited to floating-point numbers, but the same happens for complex floating-point numbers.)

This also happens when the scalar type is replaced by its abstract form, e.g. np.double by np.floating[np._typing._nbit_base._64Bit]; again, only the assignment to float64 fails while all the others pass.

From reading some of the recent issues, it was my impression that assigning np.floating to any more concrete dtype should be forbidden, but this seems to be the case only for the two aforementioned types. In any case, I would expect consistent behavior across all three examples. Am I missing a crucial detail here? Or is this an artifact of the changes to np.float64 and np.complex128 introduced in recent versions?

Reproduce the code example:

import numpy as np
import numpy.typing as npt


x_1: npt.NDArray[np.double] = np.array([1], dtype=np.double)
reveal_type(x_1)
y_1: npt.NDArray[np.float64] = x_1

x_2: npt.NDArray[np.half] = np.array([1], dtype=np.half)
reveal_type(x_2)
y_2: npt.NDArray[np.float16] = x_2

x_3: npt.NDArray[np.single] = np.array([1], dtype=np.single)
reveal_type(x_3)
y_3: npt.NDArray[np.float32] = x_3
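
For reference, the analogous complex case looks like this (a sketch continuing the example above; it was not part of the run that produced the error output below):

x_4: npt.NDArray[np.cdouble] = np.array([1], dtype=np.cdouble)
reveal_type(x_4)  # complexfloating[_64Bit, _64Bit]
y_4: npt.NDArray[np.complex128] = x_4  # rejected, just like the float64 case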

Error message:

shape2.py:6: note: Revealed type is "numpy.ndarray[builtins.tuple[Any, ...], numpy.dtype[numpy.floating[numpy._typing._nbit_base._64Bit]]]"
shape2.py:7: error: Incompatible types in assignment (expression has type "ndarray[tuple[Any, ...], dtype[floating[_64Bit]]]", variable has type "ndarray[tuple[Any, ...], dtype[float64]]")  [assignment]
shape2.py:10: note: Revealed type is "numpy.ndarray[builtins.tuple[Any, ...], numpy.dtype[numpy.floating[numpy._typing._nbit_base._16Bit]]]"
shape2.py:14: note: Revealed type is "numpy.ndarray[builtins.tuple[Any, ...], numpy.dtype[numpy.floating[numpy._typing._nbit_base._32Bit]]]"
Found 1 error in 1 file (checked 1 source file)

Python and NumPy Versions:

2.3.0
3.12.11 | packaged by conda-forge | (main, Jun 4 2025, 14:29:09) [MSC v.1943 64 bit (AMD64)]

Type-checker version and settings:

mypy 1.16.0

Command used: mypy --strict shape2.py

Additional typing packages:

mypy_extensions 1.1.0, typing_extensions 4.14.0, typing-inspection 0.4.1, annotated_types 0.7.0

No stub packages.

@MilanStaffehl
Author

From a cursory look, it seems the problem also exists in numpy 2.2.

@jorenham
Member

jorenham commented Jun 9, 2025

The difference between np.float64 and np.double is semi-intentional. As you correctly spotted, np.double is currently a broader type than np.float64, so float64 can be assigned to double, but not the other way around. And even though it's not documented anywhere (and it shouldn't be), this difference can be exploited to work around the now-deprecated mess that's caused by numpy.typing.NBitBase:

import numpy as np
import numpy.typing as npt

def pos_naive(x: np.float64) -> np.float64:
    return +x

def pos_workaround(x: np.double) -> np.float64:
    return +x  # probably needs a cast or type: ignore

def pos_evil_dont_use_this_pls[T: npt.NBitBase](x: np.floating[T]) -> np.floating[T]:
    return +x


y = pos_evil_dont_use_this_pls(np.float64(42))
reveal_type(y)  # floating[_64Bit] 
                # ^-- yikes

pos_naive(y)       # STOP! You've violated the law! Pay the court a fine or serve your sentence.
pos_workaround(y)  # Everything's in order.
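
For reference, the same one-way assignability also shows up at scalar level (a minimal sketch against the current numpy 2.3 stubs):

import numpy as np

x = np.float64(1.0)
d: np.double = x   # accepted: float64 is a subtype of the broader double
f: np.float64 = d  # rejected by mypy: floating[_64Bit] is not float64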

So np.double is a bit of a double-edged sword: It can be useful in certain situations when used defensively (i.e. in input positions), but could lead to issues if you use it offensively (in output positions), in which case you should always use np.float64 instead.
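
Roughly, the guideline looks like this (a sketch; scale and its semantics are made up):

import numpy as np

def scale(x: np.double, factor: float) -> np.float64:
    # defensive input: np.double also accepts bare floating[_64Bit] values
    # precise output: the caller is promised the concrete np.float64
    return np.float64(x * factor)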

So in a perfect world, np.double would be exactly equivalent to np.float64. But until that time, there are still use-cases for it.

But to be honest, I'm a bit worried that the cure is worse than the disease, since its side effects probably aren't obvious to anyone who isn't a full-time stub developer. So what do you think we should do here?

@MilanStaffehl
Author

So what do you think we should do here?

Personally, just from my very limited and perhaps naive perspective, I would prefer eventual consistency across all these aliases, i.e. having np.double and np.float64 be perfectly equivalent. Firstly, because then all scalar types neatly behave the same way and the old special treatment for np.float64 is finally ancient history. And secondly, I find this compelling since the documentation already claims that these two are mere aliases of one another. The fact that they then do not behave as such is confusing. The role that np.double currently fills would then perhaps best be filled by a new/different scalar type - if at all! As you say:

And even though it's not documented anywhere (and it shouldn't be), this difference can be exploited to work around the now-deprecated mess that's caused by numpy.typing.NBitBase

I take it you mean that this kind of workaround is not officially encouraged. So I don't see much reason for a wider np.double to exist purely for this purpose. For example, to build an equivalent signature for pos_evil_dont_use_this_pls without NBitBase in the code you show, I would opt for a solution using overloads, as sketched below - which I believe you also suggest in the docs. That is future-proof, more explicit, and perfectly equivalent. So the continued existence of a slightly wider np.double for this kind of purpose seems superfluous to me, and an invitation to use it in ways that are not encouraged.
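
For illustration, such an overload-based signature could look roughly like this (a sketch, not an official recipe; pos is named for symmetry with your example):

import numpy as np
from typing import overload

@overload
def pos(x: np.float16) -> np.float16: ...
@overload
def pos(x: np.float32) -> np.float32: ...
@overload
def pos(x: np.float64) -> np.float64: ...
def pos(x: np.floating) -> np.floating:
    # one explicit overload per concrete width instead of
    # abstracting over the deprecated NBitBase
    return +x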

However, I also understand that this could be considered quite an impactful change, and that it will probably break some people's existing code (https://xkcd.com/1172/). My opinion here is very much informed by preferentially using very explicit scalar types and doing so on small projects that are easily adapted to changes. I recognize not everyone has that luxury.

All that is to say, I would prefer one of two options (or both, one after the other):

  1. Document the discrepancy in some careful way so no one is surprised when assigning np.double to np.float64 does not work, but no one gets any funny ideas either.
  2. Make np.double and np.float64 equivalent (eventually) - see the sketch below.
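
For option 2, I imagine the stub-level change would roughly amount to this (a guess on my part, not the actual diff in #29155):

from typing import TypeAlias

# hypothetically, in numpy/__init__.pyi:
double: TypeAlias = float64  # instead of the current, broader floating[_64Bit]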

@jorenham
Member

jorenham commented Jun 9, 2025

However, I also understand that this could be considered quite an impactful change, and that it will probably break some people's existing code (xkcd.com/1172)

I'll create a quick PR for this, so we can see what mypy_primer has to say about it.

@jorenham
Member

jorenham commented Jun 9, 2025

Oh and for what it's worth, in numtype they're already equivalent aliases of each other.
