ENH: half precision complex #14753
It is currently a major project to add new types to core NumPy, so this sort of enhancement will wait on a redo of the type system.
@charris
Hi @mattip @charris, I was told by @rgommers that the new dtype system will likely land in the upcoming 1.20. Do you think someone from the NumPy team will have the bandwidth to work on adding `complex32`?
It does seem like something worthwhile to add. @seberg thoughts?
It has been in the back of my mind since float16 was introduced. If there is now a use for complex32, I'm in favor of adding support. @leofang NumPy does its float16 computations in float32, converting back and forth, so it should not be too difficult to add. I assume the GPU versions are more direct. We could probably add an analogous `complex32` type.
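As an aside (an illustration added here, not from the original thread, using only public NumPy APIs), the float32 round trip described above is observable from Python:

```python
import numpy as np

# float16 inputs; NumPy performs the arithmetic via float32 internally
# and rounds the result back to half precision.
a = np.float16(0.1)
b = np.float16(0.2)

s = a + b                                        # result dtype is float16
ref = np.float16(np.float32(a) + np.float32(b))  # explicit float32 round trip

print(s.dtype)   # float16
print(s == ref)  # True: same rounding as the explicit float32 round trip
```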
The new dtypes "landed" but it is still very limited. Most importantly, ufuncs still need a complete revamp and clean-up, which I think will happen in the next months (NEP 43). Chuck is right, you can add a new DType.

Other than ufuncs, I have to clean out promotion. It slowed me down a bit at the end of last year, since NumPy promotion is a bit broken and I want to find the least annoying way to keep supporting it without special cases all over the place (most likely, that isn't possible...).

To be clear, there may be trickier things to deal with as well, since the scalar code is pretty tricky in itself, and you probably would want to make the complex32 scalar use the same implementation as the other ones.

In any case, if I were now asked whether we can add float16, I would pause and ask to consider writing it externally first, since I doubt it is used all that much (or at least only in specialized code, for storage or GPU-related work). But we already have float16, so I am not opposed.
@mattip @charris @seberg Thank you all for the quick response!
With a preliminary test, I think our GPU ufuncs can already do the same thing (converting to/from `complex64`).
Right, having an ….
I am not sure I understand these parts, @seberg. Are you having … in mind?

In CuPy we don't have scalars; all scalars in NumPy are 0-d arrays in CuPy. I hope this could eliminate some complexities? (Plus, the actual type casting is done on the GPU, so really we just need an ….)
Well, I thought you'd need special CPU instructions to support it ….

In fact, for CuPy's purpose I suspect it's enough if there is a "storage-only" dtype for `complex32`.
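For concreteness (a hedged sketch, not from the thread), a "storage-only" approach is approximable today with a structured dtype: pack the half-precision components and upcast to `complex64` for any actual math. The `complex32_storage` name and both helpers are hypothetical, not NumPy API:

```python
import numpy as np

# Hypothetical storage-only layout: two float16 fields per element.
complex32_storage = np.dtype([("re", np.float16), ("im", np.float16)])

def to_complex64(a):
    """Upcast the packed half-precision components for computation."""
    return a["re"].astype(np.float32) + 1j * a["im"].astype(np.float32)

def from_complex64(c):
    """Round a complex64 result back into the packed float16 layout."""
    out = np.empty(c.shape, dtype=complex32_storage)
    out["re"] = c.real.astype(np.float16)
    out["im"] = c.imag.astype(np.float16)
    return out

a = np.zeros(3, dtype=complex32_storage)
a["re"] = [1.0, 2.0, 3.0]
b = from_complex64(to_complex64(a) * 1j)  # compute in complex64, store as float16 pairs
print(b["re"], b["im"])                   # [0. 0. 0.] [1. 2. 3.]
```

The obvious cost of this workaround is that nothing (ufuncs, casting, promotion) understands the packed layout; every operation has to go through explicit helpers, which is exactly what a real dtype would avoid.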
@leofang For the ufuncs, if your ufunc currently supports ….

Yes, no scalars might eliminate some complexities, but at least to me it would seem a bit strange if ….

Using ….
Sorry @seberg that I dropped the ball...
Am I understanding it correctly that these are two facets of the same problem, so if we fix one we fix both? What if we make ….
Not sure if I correctly get the different behavior that you referred to, but given that the actual computation performed on a `complex32` array would happen in `complex64` anyway, ….
Yes, you could hack around the ufunc behaviour by not allowing safe casts that are normally around (as well as promotions). NumPy currently uses "safe casting" semantics for promotion (in ufuncs mostly), which in my current opinion mixes two concepts that do not really quite fit.

My hesitation is that ….

Also, you could argue that complex32 will be just as odd as float16 already is.
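As a concrete illustration (mine, not from the thread) of the safe-casting-drives-promotion behaviour described above, using the existing float16:

```python
import numpy as np

# Promotion follows "safe casting": float16 casts safely upward, so mixed
# operations pick the wider type. A complex32 would presumably sit below
# complex64 in the same lattice.
print(np.can_cast(np.float16, np.float32))       # True  (safe cast)
print(np.result_type(np.float16, np.float32))    # float32
print(np.result_type(np.float16, np.complex64))  # complex64
```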
Thanks for the evaluation, Sebastian. To me it's fine that both ….

As for ….

Finally, it seems our discussion is converging. What does it take to proceed from here? Should I bomb the mailing list? Does it require a NEP? I might not be able to contribute to the actual code, but I'd be happy to do some of the logistics (if any). Thanks.
Most processors (except some of the really recent ones) do not support half-precision floating point for computation. They allow it for storage purposes, but if a computation is needed, the half-float is first converted to a full float, the computation is done, and the result is stored back as half precision. I was curious how NumPy handles this restriction. Is the numpy.half data type really a half type (and in that case, how are the processor and language incompatibilities handled), or does it simply use the approach above? Thanks
NumPy does float16 computations using float32. There is no hardware/compiler support used. |
Hi @seberg, I'd like to follow up and see how much closer we are / how much work is needed to get `complex32` supported.
@leofang no, and it is not on anyone's roadmap.
Seconding this issue! Having native support for `complex32` would be great.
The last discussions in this issue date back about four years. I'd guess that the many changes made to the data type system since then, including the 2.x releases, might have changed the landscape? Any comments on the feasibility of this?
One possibility would be to attempt to implement this in ….
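The package name is elided in the comment above, but for context (an illustration, not part of the thread): the third-party `ml_dtypes` package is an existing example of low-precision dtypes (bfloat16, float8 variants) implemented outside NumPy core on the new DType machinery, which is the "write it externally" route suggested earlier. Note it does not currently ship a complex32:

```python
# Illustration only: ml_dtypes is a third-party package (pip install ml_dtypes),
# not part of NumPy, and it does not currently provide a complex32.
import numpy as np
import ml_dtypes

a = np.array([1.5, 2.25], dtype=ml_dtypes.bfloat16)
print(a.dtype)               # bfloat16
print(a.astype(np.float32))  # [1.5  2.25]
```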
Some binary file formats use half-precision complex numbers (float16 for the real and imaginary parts).
Since NumPy already supports half-precision floats, wouldn't it be possible to support the complex counterpart?
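For the file-format use case, a hedged workaround sketch is possible today under assumptions (the interleaved little-endian layout and the `data.bin` file name are placeholders, not from the issue): read the float16 pairs with a structured dtype, then upcast to `complex64` for computation.

```python
import numpy as np

# Assumed on-disk layout: interleaved little-endian float16 (re, im) pairs;
# "data.bin" is a placeholder file name.
pair = np.dtype([("re", "<f2"), ("im", "<f2")])
raw = np.fromfile("data.bin", dtype=pair)

# Upcast for computation, since no complex32 dtype exists in NumPy.
z = raw["re"].astype(np.float32) + 1j * raw["im"].astype(np.float32)
print(z.dtype)  # complex64
```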