ENH: use more fine-grained critical sections in array coercion internals (#30514) #30620
Merged
charris merged 1 commit into numpy:maintenance/2.4.x on Jan 9, 2026
Conversation
Backport of #30514.
Towards addressing #30494. Right now the critical section I remove here in PyArray_FromAny_int shows up in the profile for the script in that issue.

I added the critical section in 5a031d9, and this partially reverts that change. On reflection, it's not a good idea to introduce a scaling bottleneck here in service of a rather wonky thing to do: mutating the operand of np.array() while the array is being created. Instead, we should raise an error in those cases, as we already do without the critical section.
I also needed to add new, more fine-grained critical sections in PyArray_DiscoverDTypeAndShape_Recursive and PyArray_AssignFromCache_Recursive to avoid data races due to use of the PySequence_Fast API. I also updated the tests so the affected cases are allowed to raise errors instead of succeeding, since that success relied on introducing a scaling bottleneck for valid read-only uses.
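To make the "valid read-only uses" concrete, here is a minimal, hypothetical reduction of that workload (not the script from #30494): several threads concurrently coerce the same nested Python list with np.array() without mutating it. With the fine-grained critical sections covering the PySequence_Fast access internally, this pattern stays race-free and every thread should observe the same result; the names `shared` and `coerce` below are illustrative, not from the PR.

```python
import threading
import numpy as np

# Illustrative read-only workload: many threads coercing the *same*
# nested Python list into arrays. No thread mutates `shared`, so all
# coercions must agree regardless of interleaving.
shared = [[float(i + j) for j in range(64)] for i in range(64)]
results = []
lock = threading.Lock()  # guards only the Python-level results list

def coerce():
    arr = np.array(shared)  # read-only coercion of the shared operand
    with lock:
        results.append(arr)

threads = [threading.Thread(target=coerce) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

expected = np.array(shared)
assert all(a.shape == (64, 64) for a in results)
assert all(np.array_equal(a, expected) for a in results)
```

Note that the only user-level lock here protects the Python list of results; the array coercion itself needs no caller-side locking, which is exactly why coercion internals should avoid serializing concurrent read-only callers.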
Here's a Samply profile captured with this PR applied, running the script from #30494; I no longer see a scaling bottleneck inside the array coercion routines: https://share.firefox.dev/44KLZxs