gh-135641: Fix flaky test_capi.test_lock_two_threads test case #135642


Merged: 1 commit, Jun 18, 2025
Modules/_testinternalcapi/test_lock.c (4 additions & 1 deletion)

@@ -57,7 +57,10 @@ lock_thread(void *arg)
     _Py_atomic_store_int(&test_data->started, 1);
 
     PyMutex_Lock(m);
-    assert(m->_bits == 1);
+    // gh-135641: in rare cases the lock may still have `_Py_HAS_PARKED` set
+    // (m->_bits == 3) due to bucket collisions in the parking lot hash table
+    // between this mutex and the `test_data.done` event.
Comment on lines +60 to +62

Contributor:

Are there any consequences of this being set incorrectly? I think another solution, though much more invasive, might be to move `num_waiters` from `Bucket` onto `wait_entry`.

@colesbury (Contributor, Author), Jun 18, 2025:

> Are there any consequences of this being set incorrectly?

The unlock will just take a slower path. The approximate (conservative) setting of _Py_HAS_PARKED is taken from WebKit, so I'm not too worried about it. It's just easy to forget about when writing these sorts of tests.
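
A simplified sketch of that fast/slow split (illustrative only, not the actual `Python/lock.c` code; the `sketch_*` names are made up):

```c
#include <stdatomic.h>
#include <stdint.h>

#define _Py_LOCKED      1
#define _Py_HAS_PARKED  2

static void
sketch_unpark_one(_Atomic uint8_t *bits)
{
    // Stand-in for the parking lot: the real code would scan the bucket
    // for a waiter on this address and hand the lock off; with no waiter
    // actually queued, the word is simply reset.
    atomic_store(bits, 0);
}

static void
sketch_mutex_unlock(_Atomic uint8_t *bits)
{
    uint8_t expected = _Py_LOCKED;
    // Fast path: held with no waiter bit -> release with a single CAS.
    if (atomic_compare_exchange_strong(bits, &expected, 0)) {
        return;
    }
    // Slow path: _Py_HAS_PARKED is set, perhaps only conservatively
    // because an unrelated waiter (here: one on test_data.done) shares
    // the bucket. The lock is still released correctly; the only cost
    // is the detour through the parking lot.
    sketch_unpark_one(bits);
}

int
main(void)
{
    _Atomic uint8_t bits = _Py_LOCKED;
    sketch_mutex_unlock(&bits);                         // fast path
    atomic_store(&bits, _Py_LOCKED | _Py_HAS_PARKED);
    sketch_mutex_unlock(&bits);                         // slow path
    return 0;
}
```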

> I think another solution, though much more invasive, might be to move `num_waiters` from `Bucket` onto `wait_entry`.

You can do this nicely and efficiently if you restructure the buckets into a tree of linked lists, which is something we talked about when merging the original parking lot PR, but it hasn't been a priority.

There's currently a potential O(N^2) behavior in the parking lot due to a linear scan (where N = threads waiting on locks) that using a balanced tree would fix. That's not something I'd want to backport, though.
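
Roughly, the current shape (a condensed, hypothetical model, not the real `Python/parking_lot.c` structures):

```c
#include <stddef.h>
#include <stdio.h>

// Condensed, hypothetical model of a parking-lot bucket: every address
// that hashes to this bucket shares one list and one counter.
struct wait_entry {
    const void *addr;            // address this thread is parked on
    struct wait_entry *next;
};

struct bucket {
    struct wait_entry *head;     // waiters for *all* colliding addresses
    size_t num_waiters;          // counts the whole bucket, not one addr
};

// Waking one waiter for `addr` has to scan past entries parked on other
// addresses: O(N) per wake, O(N^2) to drain N waiters in the worst case.
static struct wait_entry *
find_waiter(struct bucket *b, const void *addr)
{
    for (struct wait_entry *e = b->head; e != NULL; e = e->next) {
        if (e->addr == addr) {
            return e;
        }
    }
    return NULL;  // bucket non-empty, yet no waiter on this address
}

int
main(void)
{
    int mutex_word, event_word;  // two unrelated objects, same bucket
    struct wait_entry on_event = { &event_word, NULL };
    struct wait_entry on_mutex = { &mutex_word, &on_event };
    struct bucket b = { &on_mutex, 2 };

    // b.num_waiters == 2 even though only one waiter is on the mutex,
    // so anything keyed off the counter (like keeping _Py_HAS_PARKED
    // set) is conservative.
    printf("waiter on mutex: %p\n", (void *)find_waiter(&b, &mutex_word));
    printf("waiter on event: %p\n", (void *)find_waiter(&b, &event_word));
    return 0;
}
```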

Here's the equivalent fix in Go for their mutex implementation: golang/go@990124d

+    assert(m->_bits == 1 || m->_bits == 3);
 
     PyMutex_Unlock(m);
     assert(m->_bits == 0);
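
As a reading aid, here is what the `_bits` values asserted in this test decode to. The flag values follow from the asserts above and CPython's `Include/internal/pycore_lock.h`; the helper itself is hypothetical:

```c
#include <stdint.h>
#include <stdio.h>

// Flag values, as implied by the asserts above and defined in CPython's
// Include/internal/pycore_lock.h.
#define _Py_LOCKED      1
#define _Py_HAS_PARKED  2

// Hypothetical helper: decode a _bits value as seen in this test.
static void
describe_bits(uint8_t bits)
{
    printf("_bits=%d: locked=%d, parked-bit=%d\n", bits,
           (bits & _Py_LOCKED) != 0,
           (bits & _Py_HAS_PARKED) != 0);
}

int
main(void)
{
    describe_bits(0);  // after PyMutex_Unlock: free
    describe_bits(1);  // after PyMutex_Lock: held, no suspected waiters
    describe_bits(3);  // held, waiter bit set -- possibly only
                       // conservatively, via the bucket collision above
    return 0;
}
```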