[1.21] utils/RunUnderSystemdScope: fix wrt channel deadlock #5914
Conversation
As seen in [1], sometimes the coreos/go-systemd/dbus package deadlocks: jobComplete is stuck trying to send the job result string to the channel while holding the jobListener lock, while startJob (called by StartTransientUnit) waits for the same lock. Alas, it is not clear why the channel is not being read, nor was I able to reproduce it locally.

Make the job result channel buffered, so jobComplete won't block on the channel send and thus StartTransientUnit won't be stuck either.

While at it:
- move the error wrapping out of the mgr.RetryOnDisconnect function, and use fmt.Errorf with %w instead of the obsoleted errors.Wrap;
- improve error messages, printing the systemd unit name (so we can check it in the systemd log);
- check the job result string -- if it is not "done", return an error back to the caller, which should help avoid other issues down the line.

[1] https://bugzilla.redhat.com/show_bug.cgi?id=2082344

Signed-off-by: Kir Kolyshkin <[email protected]>
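For reference, a minimal sketch of the pattern the description refers to, assuming a connected *dbus.Conn from github.com/coreos/go-systemd/v22/dbus; the package name, function signature, and property set are illustrative rather than the exact cri-o code:

```go
// Sketch only, not the cri-o implementation: a buffered job-result channel,
// a check of the result string against "done", and %w error wrapping.
package scopeutil

import (
	"context"
	"fmt"
	"time"

	systemdDbus "github.com/coreos/go-systemd/v22/dbus"
	godbus "github.com/godbus/dbus/v5"
)

func runUnderSystemdScope(ctx context.Context, conn *systemdDbus.Conn, unitName string, pid int) error {
	properties := []systemdDbus.Property{
		systemdDbus.PropDescription("transient scope for conmon (illustrative)"),
		// Ask systemd to move the given PID into the new scope.
		{Name: "PIDs", Value: godbus.MakeVariant([]uint32{uint32(pid)})},
	}

	// Buffered channel: the library's job-completion callback can deliver
	// the result even if this caller is not reading yet, which is the
	// situation described in the deadlock report.
	ch := make(chan string, 1)

	if _, err := conn.StartTransientUnitContext(ctx, unitName, "replace", properties, ch); err != nil {
		// fmt.Errorf with %w keeps the underlying error available to
		// errors.Is / errors.As, unlike the obsoleted errors.Wrap.
		return fmt.Errorf("failed to add conmon to systemd unit %s: %w", unitName, err)
	}

	select {
	case s := <-ch:
		// Check the job result; anything other than "done" is an error.
		if s != "done" {
			return fmt.Errorf("error moving conmon with pid %d to systemd unit %s: got %s", pid, unitName, s)
		}
	case <-time.After(time.Minute * 6):
		// Still report a failure if systemd never replies at all.
		return fmt.Errorf("timed out moving conmon with pid %d to systemd unit %s", pid, unitName)
	}
	return nil
}
```

A buffer of capacity 1 is enough here because exactly one result string is sent per started job.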
Codecov Report
@@             Coverage Diff              @@
##           release-1.21    #5914    +/-   ##
================================================
- Coverage        44.97%   44.97%   -0.01%
================================================
  Files              109      109
  Lines            11063    11065       +2
================================================
  Hits              4976     4976
- Misses            5608     5610       +2
  Partials           479      479
		if s != "done" {
			return fmt.Errorf("error moving conmon with pid %d to systemd unit %s: got %s", pid, unitName, s)
		}
	case <-time.After(time.Minute * 6):
I find myself wondering if we still need this case. Ideally, the buffered channel would fix the deadlock long term. From the sounds of it, having the request timeout just caused a less obvious deadlock.
I believe we do. The buffered channel fixes the deadlock, but if we haven't received a reply from systemd we should still say so and return an error.
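To make the distinction concrete, a toy sketch with hypothetical names (not go-systemd code): the buffer guarantees the sender of the job result never blocks, while the receive-side timeout still covers the case where no result arrives at all.

```go
// Toy illustration: a buffered channel lets the sender finish even if the
// receiver is late, while the receiver's timeout handles "no reply at all".
package main

import (
	"fmt"
	"time"
)

func main() {
	// Capacity 1 mirrors the buffered job-result channel from the fix.
	results := make(chan string, 1)

	// Stand-in for the library side that reports the job result.
	go func() {
		// With capacity 1 this send completes even if nobody is
		// receiving yet, so the sender cannot deadlock here.
		results <- "done"
	}()

	// Stand-in for the caller waiting on the job result.
	select {
	case s := <-results:
		fmt.Println("job result:", s)
	case <-time.After(time.Second):
		// The buffer does not help if no result is ever sent,
		// so the timeout case is still needed.
		fmt.Println("timed out waiting for job result")
	}
}
```

Dropping the time.After case would leave the caller hanging forever in that last situation, which is why it stays.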
lint job fails since it uses an old version of golangci-lint but a new version of golang (IOW release-1.21 branch needs some love).

/retest
LGTM, @cri-o/cri-o-maintainers PTAL
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: haircommander, kolyshkin. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Being fixed in #5918

@kolyshkin I think we should get this into 'main' first. Can we give the customer a test build to try out?

Does it matter if we do the backport or forward port?
Yes, that would be a good thing to do, despite the lack of clear repro.

let's start with main and pull back sequentially, so we order when they merge by inverse relative stability (1.21 should be very stable, 1.24 should be stable)

Forward-port to main branch: #5922

Meaning I can still do a forward-port, but the order of merging is important, right? Here's one for main: #5922
LGTM

/retest

A friendly reminder that this PR had no activity for 30 days.

/retest

/lgtm

/retest-required Please review the full test history for this PR and help us cut down flakes.

/retest-required Please review the full test history for this PR and help us cut down flakes.
25 similar comments
/override ci/openshift-jenkins/e2e_rhel

@haircommander: Overrode contexts on behalf of haircommander: ci/openshift-jenkins/e2e_rhel

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.