ENH: Add locking to umath_linalg if no lapack is detected at build time #26750

Merged
merged 4 commits into numpy:main on Jun 27, 2024

Conversation

@ngoldbaum (Member) commented on Jun 18, 2024

Closes #22509

This is a re-do of #26687 following the suggestion in #26687 (comment).

This adds locking macros around all the calls into the low-level LAPACK code in umath_linalg.cpp. The macros are either no-ops or calls that lock or unlock a global mutex declared statically in the _umath_linalg module and initialized during that module's initialization. The mutex is only used if no LAPACK was detected during the build, which indicates that lapack_lite is being used.
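For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of what such conditional locking could look like. The macro and symbol names (HAVE_EXTERNAL_LAPACK, LOCK_LAPACK_LITE, UNLOCK_LAPACK_LITE, lapack_lite_lock, init_lapack_lite_lock, and the dgesv_ prototype) are illustrative assumptions, not the exact identifiers used in this PR:

```cpp
/*
 * Illustrative sketch only: conditional locking around lapack_lite calls.
 * All identifiers below are hypothetical, not copied from umath_linalg.cpp.
 */
#include <Python.h>
#include <pythread.h>

#ifdef HAVE_EXTERNAL_LAPACK
/* An external LAPACK is assumed to be thread-safe: the macros compile away. */
#define LOCK_LAPACK_LITE
#define UNLOCK_LAPACK_LITE
#else
/* lapack_lite is not thread-safe: serialize every call on a global mutex. */
static PyThread_type_lock lapack_lite_lock;
#define LOCK_LAPACK_LITE   PyThread_acquire_lock(lapack_lite_lock, WAIT_LOCK)
#define UNLOCK_LAPACK_LITE PyThread_release_lock(lapack_lite_lock)
#endif

/* Called once from the extension module's init/exec function. */
static int
init_lapack_lite_lock(void)
{
#ifndef HAVE_EXTERNAL_LAPACK
    lapack_lite_lock = PyThread_allocate_lock();
    if (lapack_lite_lock == NULL) {
        PyErr_NoMemory();
        return -1;
    }
#endif
    return 0;
}

/* Hypothetical prototype for a low-level LAPACK entry point. */
extern "C" void dgesv_(int *n, int *nrhs, double *a, int *lda,
                       int *ipiv, double *b, int *ldb, int *info);

/* Example call site: every low-level call is bracketed by the macros. */
static int
solve_locked(int n, int nrhs, double *a, int lda,
             int *ipiv, double *b, int ldb)
{
    int info;
    LOCK_LAPACK_LITE;
    dgesv_(&n, &nrhs, a, &lda, ipiv, b, &ldb, &info);
    UNLOCK_LAPACK_LITE;
    return info;
}
```

When an external LAPACK is detected the macros expand to nothing, so builds against a real LAPACK pay no locking overhead; only lapack_lite builds serialize their calls.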

The test I added randomly fails or segfaults on main if you force numpy to build using lapack_lite.

Happy to add more extensive thread safety testing if there's a desire for it.

I think I got all the low-level calls but please let me know if you think I missed something.

@ngoldbaum added the "39 - free-threading" label (PRs and issues related to support for free-threading CPython, a.k.a. no-GIL, PEP 703) on Jun 18, 2024
@ngoldbaum (Member, Author) commented

I think this needs a release note, since lapack_lite has been thread-unsafe forever. All the work I'm doing toward improving and documenting thread safety probably needs release notes as well. Commenting here to remind myself to open a follow-up issue about a release note for free-threaded support and a release note specific to this change, since this thread safety issue is a problem even with the GIL.

@seberg (Member) left a comment

Nice and straightforward! I'm happy not to test every function here; that might be nice but would be rather repetitive anyway.
Mostly one small comment: I think we should use have_lapack (even if it may not matter in practice)?

I was considering whether there are other BLAS/LAPACK implementations that are not thread-safe, but I think renaming the lock is a bridge we can cross when that happens.

@rgommers (Member) left a comment

LGTM modulo two quite minor comments. Thanks @ngoldbaum

@ngoldbaum force-pushed the lapacklite-locking branch from ebdeb4d to 79bd9cc on June 19, 2024 at 20:47
@ngoldbaum changed the title from "ENH: Add locking to umath_linalg if no blas is detected at build time" to "ENH: Add locking to umath_linalg if no lapack is detected at build time" on Jun 19, 2024
@rgommers (Member) left a comment

Thanks Nathan. One more little bug in one of the last commits, plus two tiny suggestions for the release note

@rgommers added this to the 2.1.0 release milestone on Jun 27, 2024
@rgommers (Member) left a comment

LGTM now, in it goes. Thanks Nathan! And thanks for the review Sebastian.

@rgommers merged commit 128d1ae into numpy:main on Jun 27, 2024
68 checks passed
Labels
01 - Enhancement
39 - free-threading (PRs and issues related to support for free-threading CPython, a.k.a. no-GIL, PEP 703)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

BUG: Lapack lite is not thread-safe (need to guard)
3 participants