
Conversation

MyUserNameWasTakenLinux
Contributor

This pull request addresses #26701. Converting a string to np.clongdouble would result in a loss of precision, because the conversion went through Python's complex type as an intermediate, which has less precision than np.clongdouble. The pull request adds a test which fails on the main branch on Linux x86.
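The double-precision detour can be illustrated from Python. This is a sketch with an arbitrary example value; the difference only appears on platforms where longdouble is wider than double:

```python
import numpy as np

# An illustrative high-precision value (arbitrary example digits)
s = "0.1234567890123456789012345"

# Old path: Python's complex() parses at double precision first,
# so digits beyond ~17 significant figures are discarded.
via_complex = np.clongdouble(complex(s))

# Parsing the real part directly as longdouble keeps the extra digits
# on platforms where longdouble is wider than double (e.g. x86 80-bit).
direct = np.clongdouble(np.longdouble(s))

print(via_complex, direct)  # may differ in the trailing digits
```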

@MyUserNameWasTakenLinux MyUserNameWasTakenLinux marked this pull request as draft September 12, 2025 06:20
@ngoldbaum
Member

Neat!

Just a heads-up - longdouble is a bit of an unloved feature. That said, this seems reasonable to support.

Some suggestions:

  • I doubt the test you added exercises all the code paths in the code you added. Can you try to make sure your tests cover all the new code you added? Error cases are often poorly tested, for example.
  • While we're at it, we could fix the casts between the string DTypes and the longdouble dtype - IIRC the casts have the same problem.

I wonder if the linalg test that is failing is relying on the old broken conversion.

Ping @SwayamInSync - you might be interested in this. Swayam has been working on a quad precision DType we're hoping will ultimately allow us to deprecate longdouble support (or at least the float128 alias) in NumPy proper. The fact that longdouble has a subtly different meaning under different architectures is a constant source of bug reports and most users won't need it at all.
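The platform-dependence mentioned above is easy to see with np.finfo, which reports what this particular build's longdouble actually is:

```python
import numpy as np

# longdouble is platform-dependent: 80-bit extended on x86 Linux,
# IEEE binary128 on some POWER builds, and a plain 64-bit double on
# MSVC and Apple ARM.  np.finfo reports what this build actually uses.
ld = np.finfo(np.longdouble)
dd = np.finfo(np.float64)
print("longdouble mantissa bits:", ld.nmant)
print("wider than double:", ld.nmant > dd.nmant)
```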

@WarrenWeckesser
Member

Not a full review, but here are a couple issues that should be easy to fix:

  • This will not handle an input such as '0.5j' correctly; it returns 0.5+0j.
  • It incorrectly accepts an input such as '0.5 plate of shrimp'. It prematurely returns successfully in that case.
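For reference, CPython's own complex() parser handles both of these cases the way the new code should:

```python
# Pure-imaginary input must keep its imaginary part...
assert complex('0.5j') == 0.5j

# ...and trailing junk must raise, not silently succeed.
try:
    complex('0.5 plate of shrimp')
except ValueError:
    pass
else:
    raise AssertionError("trailing junk was accepted")
```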

@WarrenWeckesser
Member

Again not a full review (sorry), but it looks like the failures are because the new CLONGDOUBLE_setitem() is not correctly handling all its possible inputs. It might be given, for example, a Python float, a Python complex, a NumPy complex64, a NumPy complex128, a NumPy float64, a NumPy longdouble, etc. I don't know the NumPy C API well enough off the top of my head to quickly suggest the minimal set of type checks and conversion functions that would handle all the cases that will be handed to CLONGDOUBLE_setitem().
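The variety of inputs can be reproduced from Python by assigning each kind of value into a clongdouble array, which exercises the same conversions on the C side. A sketch; 1.5 is chosen because it is exactly representable in every listed type:

```python
import numpy as np

# The kinds of values CLONGDOUBLE_setitem() may receive:
arr = np.zeros(6, dtype=np.clongdouble)
inputs = [1.5,                   # Python float
          1.5 + 0j,              # Python complex
          np.complex64(1.5),     # NumPy complex64
          np.complex128(1.5),    # NumPy complex128
          np.float64(1.5),       # NumPy float64
          np.longdouble(1.5)]    # NumPy longdouble
for i, v in enumerate(inputs):
    arr[i] = v
assert (arr == 1.5).all()
```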


const char *p = end;
if (*p != '+' && *p != '-') {
if(*p == 'j') {


Suggested change
if(*p == 'j') {
if (*p == 'j' || *p == 'J') {
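For comparison, CPython's complex() parser accepts the imaginary suffix in either case, which is the behavior the suggestion restores:

```python
# Both 'j' and 'J' are valid imaginary suffixes in CPython's parser.
assert complex('1j') == complex('1J') == 1j
assert complex('2.5J') == 2.5j
```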

@benburrill benburrill Sep 13, 2025

Probably should allow parentheses (as CPython, and hence np.cdouble does). Python likes to put parentheses in complex reprs, so I think it is important to handle this case at least, even if you're dropping support for some of the more esoteric bits of the CPython parser, like 1-j and 1_000.
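The parentheses case matters for round-tripping, since repr() of a complex number with a nonzero real part includes them:

```python
# repr() adds parentheses when there is a real part, so a parser that
# rejects them cannot round-trip complex values through str.
assert repr(1 + 2j) == '(1+2j)'
assert complex('(1+2j)') == 1 + 2j
assert complex(repr(1.5 - 0.5j)) == 1.5 - 0.5j
```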

b = op;
Py_XINCREF(b);
}
s = PyBytes_AsString(b);

@benburrill benburrill Sep 13, 2025


Most of these type checks and conversions are redundant because string_to_long_cdouble is only ever passed bytes or unicode objects it seems. Why not just have string_to_long_cdouble(char *s)?
