Monitor: Reap all processes #15676
Conversation
Could it be a fix for #15373?
Happy to see the additional logs and artifacts ended up being useful. /lgtm
if wpid == 0 {
    log.Log.Infof("No more processes to be reaped")
    break
nit: maybe we should add a debug log in case wpid < 0
tbh, I think when syscall.Wait4 returns -1 it also comes back with err set - and that is already logged...
I think the new loop already eliminates any possible race since it reaps all the waiting child processes in one go, even if only one SIGCHLD was sent.
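For illustration, here is a minimal, self-contained sketch of the kind of drain loop being discussed, using standard-library logging and illustrative names rather than the exact PR code:

package main

import (
    "log"
    "os"
    "os/signal"
    "syscall"
)

// reapAll drains every already-exited child in one pass. Running it once per
// SIGCHLD is enough even when several exits were coalesced into a single
// notification, because Wait4 with WNOHANG is retried until nothing is left.
func reapAll() {
    for {
        var wstatus syscall.WaitStatus
        wpid, err := syscall.Wait4(-1, &wstatus, syscall.WNOHANG, nil)
        if err != nil {
            // The raw wait4 returning -1 surfaces here as a non-nil err
            // (for example ECHILD when there are no children at all).
            log.Printf("wait4 failed: %v", err)
            return
        }
        if wpid == 0 {
            // Children still exist, but none of them have exited yet.
            return
        }
        log.Printf("reaped pid %d, exit status %d", wpid, wstatus.ExitStatus())
    }
}

func main() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGCHLD)
    for range sigs {
        reapAll()
    }
}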
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: vladikr. The full list of commands accepted by this bot can be found here. The pull request process is described here.
/hold
Required labels detected, running phase 2 presubmits:
I'm not completely sure, but I think the race happens when multiple child processes finish before the first signal is received. Since signals are just flags and aren't queued, we only get one signal, even if more than one child has exited. That could explain the issue, though it's a rare case.
Delivery of signals can actually drop a signal if multiple signals are generated while the signal is blocked. In that case only one signal is delivered after it is unblocked. Because we always do a single Wait4 per signal, we can miss a terminated process. Sometimes the ordering happens to be unfortunate and the virt-launcher process is missed and never cleaned up. This causes the virt-launcher to hang around indefinitely. Therefore this commit tries to reap as many processes as possible per signal. Signed-off-by: Luboslav Pivarc <[email protected]>
Yes, this is exactly what is happening, but the thing I describe is the abstraction that Go adds on top. It matters whether the signal is processed first and then sent to the channel, or whether it can be sent to the channel before the signal was processed. In the latter case we can end the loop, a new child dies, and we miss it. Anyway, I did a small test: without the loop we missed 1-7 signals; with the loop I couldn't reproduce the problem. So I am pretty sure this helps, but I'm still not confident that it completely fixes the issue.
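A rough sketch of the sort of experiment described above (the actual test is not shown in the thread; the child binary, counts, and timeout here are assumptions): spawn a batch of short-lived children and count the SIGCHLD notifications that reach the channel - on most runs the count is lower than the number of exits, which is the coalescing being discussed.

package main

import (
    "fmt"
    "os"
    "os/exec"
    "os/signal"
    "syscall"
    "time"
)

func main() {
    sigs := make(chan os.Signal, 1)
    signal.Notify(sigs, syscall.SIGCHLD)

    const children = 20
    for i := 0; i < children; i++ {
        // /bin/true exits immediately; several exits tend to land while an
        // earlier SIGCHLD is still pending or the channel buffer is full,
        // so their notifications are merged or dropped.
        cmd := exec.Command("/bin/true")
        if err := cmd.Start(); err != nil {
            fmt.Fprintln(os.Stderr, "start failed:", err)
            os.Exit(1)
        }
        // Let the runtime reap this child so the demo leaves no zombies.
        go cmd.Wait()
    }

    notifications := 0
    timeout := time.After(2 * time.Second)
    for {
        select {
        case <-sigs:
            notifications++
        case <-timeout:
            fmt.Printf("%d children exited, %d SIGCHLD notifications received\n",
                children, notifications)
            return
        }
    }
}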
exitStatus <- wstatus.ExitStatus()
for {
    var wstatus syscall.WaitStatus
    wpid, err := syscall.Wait4(-1, &wstatus, syscall.WNOHANG, nil)
@vladikr maybe we can actually run a loop that will clean up children regardless of the signal
on exit, right?
All the time, since the syscall should block if nothing happens. But I think this is good enough as is.
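For context, the alternative being floated here might look roughly like the sketch below (assuming the same "log" and "syscall" imports as the earlier sketch); it is not what the PR implements:

// Sketch only: a reaper that ignores SIGCHLD entirely and simply blocks in
// Wait4 all the time, returning once no children are left (ECHILD).
func reapForever() {
    for {
        var wstatus syscall.WaitStatus
        wpid, err := syscall.Wait4(-1, &wstatus, 0, nil)
        if err == syscall.ECHILD {
            return // nothing left to wait for
        }
        if err != nil {
            continue // e.g. EINTR: just retry
        }
        log.Printf("reaped pid %d, exit status %d", wpid, wstatus.ExitStatus())
    }
}

One caveat with a global blocking reaper like this is that it competes with anything else in the process that also waits on specific children (for example os/exec), which is a reason to keep the signal-driven loop instead.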
big thanks @xpivarc
@fossedihelm @vladikr PTAL
/lgtm
Required labels detected, running phase 2 presubmits:
/unhold
/retest-required
@xpivarc: The following tests failed:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
/cherry-pick release-1.6 release-1.5 release-1.4 release-1.3 release-1.2 release-1.1 release-1.0
@xpivarc: new pull request created: #15816
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
What this PR does
Delivery of signals can actually drop a signal
if multiple signals are generated while the signal is blocked.
In that case only one signal is delivered after it is unblocked. Because we always do a single Wait4 per signal, we can miss a terminated process.
Sometimes the ordering happens to be unfortunate and the virt-launcher process is missed and never cleaned up.
This causes the virt-launcher to hang around indefinitely.
Therefore this commit tries to reap as many processes as possible per signal.
This was observed after a successful migration where both the source and target Pods continued running; see:
and logs:
Links to places where the discussion took place:
Special notes for your reviewer
It is not clear to me whether the Go runtime suspends the signal (makes it non-blocking) before the signal is sent to the channel, in which case there would be no race.
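For what it's worth, the os/signal documentation states that the package never blocks when sending to the notification channel: if the channel's buffer is already full, the notification is simply dropped. So regardless of the ordering inside the runtime, the receiver cannot rely on one channel receive per child exit, which is consistent with draining all children per notification. A minimal fragment of the usual setup (channel name illustrative; assumes the os, os/signal, and syscall imports):

// os/signal delivers to sigs without blocking; a SIGCHLD arriving while the
// buffer is full is dropped, so the receiver must reap everything it can on
// each notification rather than assume one receive per exited child.
sigs := make(chan os.Signal, 1)
signal.Notify(sigs, syscall.SIGCHLD)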
Release note