Conversation

@MyUserNameWasTakenLinux (Contributor)

This pull request addresses #26701. Converting a string to np.clongdouble would result in a loss of precision, because the conversion goes through Python's complex type as an intermediate, which has less precision than np.clongdouble. The pull request adds a test that fails on the main branch on Linux x86.
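As a rough illustration (a minimal sketch, assuming Linux x86, where np.longdouble is 80-bit extended precision):

```python
import numpy as np

# 1 + 1e-18 is representable in 80-bit extended precision but
# rounds to 1.0 in a C double (eps ~2.2e-16).
s = "1.000000000000000001"

# np.longdouble parses the string directly at extended precision,
# so the extra digit survives:
print(np.longdouble(s))    # 1.000000000000000001

# Before this fix, np.clongdouble round-tripped through Python's
# complex (a pair of C doubles), losing it:
print(np.clongdouble(s))   # (1+0j)
```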

@MyUserNameWasTakenLinux marked this pull request as draft September 12, 2025 06:20
@ngoldbaum (Member)

Neat!

Just a heads-up - longdouble is a bit of an unloved feature. That said, this seems reasonable to support.

Some suggestions:

  • I doubt the test you added exercises all the code paths in the new code. Can you make sure your tests cover everything you added? Error cases in particular are often poorly tested (see the sketch after this list).
  • While we're at it, we could fix the casts between the string DTypes and the longdouble dtype - IIRC the casts have the same problem.
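For the first point, a parametrized sketch of what broader coverage might look like (the specific inputs, and the assumption that malformed strings raise ValueError, are mine rather than the PR's actual tests):

```python
import numpy as np
import pytest

# Inputs that should parse; CPython's parser is the reference,
# at least up to double precision.
@pytest.mark.parametrize("s", ["1+2j", "(1+2j)", "0.5j", "inf"])
def test_clongdouble_from_string(s):
    assert np.clongdouble(s) == np.clongdouble(complex(s))

# Malformed inputs that should raise rather than partially parse.
@pytest.mark.parametrize("s", ["0.5 plate of shrimp", "1+", "(1+2j"])
def test_clongdouble_from_string_invalid(s):
    with pytest.raises(ValueError):
        np.clongdouble(s)
```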

I wonder if the linalg test that is failing is relying on the old broken conversion.

Ping @SwayamInSync - you might be interested in this. Swayam has been working on a quad precision DType we're hoping will ultimately allow us to deprecate longdouble support (or at least the float128 alias) in NumPy proper. The fact that longdouble has a subtly different meaning under different architectures is a constant source of bug reports and most users won't need it at all.
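(For context, the platform dependence is easy to see from Python:)

```python
import numpy as np

# What "longdouble" means depends on platform and compiler:
# 80-bit extended on x86 Linux, an alias for float64 with MSVC,
# IEEE quad or double-double on some other platforms.
print(np.finfo(np.longdouble))
```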

@WarrenWeckesser (Member)

Not a full review, but here are a couple of issues that should be easy to fix (illustrated below):

  • This will not handle an input such as '0.5j' correctly; it returns 0.5+0j.
  • It incorrectly accepts an input such as '0.5 plate of shrimp'. It prematurely returns successfully in that case.
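Using CPython's complex() as the reference behavior (a sketch; exact error messages aside):

```python
import numpy as np

complex("0.5j")                   # 0.5j - the reference behavior
# complex("0.5 plate of shrimp")  # raises ValueError

# The draft parser instead returned (0.5+0j) for the first input
# and accepted the second, stopping after the leading "0.5".
print(np.clongdouble("0.5j"))     # should print 0.5j, not (0.5+0j)
```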

@WarrenWeckesser (Member)

Again not a full review (sorry), but it looks like the failures are because the new CLONGDOUBLE_setitem() is not correctly handling all its possible inputs. It might be given, for example, a Python float, a Python complex, a NumPy complex64, a NumPy complex128, a NumPy float64, a NumPy longdouble, etc. I don't know the NumPy C API well enough off the top of my head to quickly suggest the minimal set of type checks and conversion functions that would handle all the cases that will be handed to CLONGDOUBLE_setitem().
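A quick way to exercise those paths from Python - each assignment below is dispatched through CLONGDOUBLE_setitem():

```python
import numpy as np

arr = np.empty(6, dtype=np.clongdouble)
arr[0] = 1.5                    # Python float
arr[1] = 1 + 2j                 # Python complex
arr[2] = np.complex64(1 + 2j)   # NumPy complex64
arr[3] = np.complex128(1 + 2j)  # NumPy complex128
arr[4] = np.float64(1.5)        # NumPy float64
arr[5] = np.longdouble("1.5")   # NumPy longdouble
print(arr)
```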

@benburrill commented Sep 13, 2025

Probably should allow parentheses (as CPython, and hence np.cdouble, does). Python likes to put parentheses in complex reprs, so I think it is important to handle this case at least, even if you're dropping support for some of the more esoteric bits of the CPython parser, like 1-j and 1_000.
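For reference, all of these are accepted by CPython's parser (and hence by np.cdouble):

```python
>>> repr(complex(1, 2))    # reprs come parenthesized
'(1+2j)'
>>> complex("(1+2j)")      # so the parser accepts parentheses
(1+2j)
>>> complex("1-j")         # legacy shorthand for 1-1j
(1-1j)
>>> complex("1_000")       # underscores, as in numeric literals
(1000+0j)
```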

@MyUserNameWasTakenLinux (Contributor Author)

The parser can handle parentheses now, though I'm not sure if dropping support for 1-j or 1_000 would be okay.

@benburrill

For the case of 1_000, although it's technically a regression, np.longdouble doesn't support underscores either, so I think it's perfectly reasonable for np.clongdouble not to support them anymore. As for 1-j, CPython says it's kept for backwards compatibility and may be removed in the future, so in my opinion it's probably fine to drop as well. But I do think it would be good to get the opinion of a NumPy maintainer.

@MyUserNameWasTakenLinux marked this pull request as ready for review September 14, 2025 23:04
@MyUserNameWasTakenLinux (Contributor Author)

Currently, I'm relying on Python's complex type to handle Python types passed to CLONGDOUBLE_setitem(PyObject *op). But this causes a loss of precision when converting a Python int to np.clongdouble. I tried using npy_longdouble_from_PyLong to handle that case, but the change causes test_nep50_weak_integers_with_inexact to fail.
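A minimal reproduction of the int path (assuming x86 Linux, where longdouble has a 64-bit significand):

```python
import numpy as np

# 2**63 + 1 needs 64 significand bits: exact in 80-bit extended
# precision, but rounded when squeezed through a C double.
n = 2**63 + 1
print(np.longdouble(n))    # 9223372036854775809 - exact
print(np.clongdouble(n))   # real part rounds to 2**63
```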

@benburrill commented Sep 15, 2025

That test specifically excludes longdouble (notice it tests "dDG" but not "g"), which has the same int->string issue, so it would probably be fine to just change the test to exclude "G" as well. That's not an ideal solution - I think the test should be rewritten so that if npy_longdouble_from_PyLong is changed in the future to go directly from int to longdouble, it either tests what it was supposed to test or xpasses - but doing the same thing for clongdouble as is done for longdouble seems fine to me.
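(For anyone decoding the type-character shorthand:)

```python
import numpy as np

# "d" = float64, "D" = complex128, "g" = longdouble, "G" = clongdouble
for ch in "dDgG":
    print(ch, np.dtype(ch))
```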

It's kind of funny, because IIRC running the NumPy test suite is what was used to determine how small a limit they could get away with for integer string conversion, and yet the chosen limit is still a problem even for the NumPy tests...
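(The limit in question is CPython's int/str conversion cap, added in 3.11; the default below assumes a stock build:)

```python
import sys

# CPython 3.11+ caps int<->str conversion size to avoid quadratic
# parsing costs; the default is 4300 digits.
print(sys.get_int_max_str_digits())   # 4300
int("9" * 5000)                        # ValueError: exceeds the limit
```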
