Add jitter to lease controller #101652
Conversation
```diff
 	return
 }
-wait.Until(c.sync, c.renewInterval, stopCh)
+wait.JitterUntil(c.sync, c.renewInterval, 0.04, true, stopCh)
```
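For reference, a minimal sketch of what the new call does, based on the semantics of `k8s.io/apimachinery/pkg/util/wait` (the `renew` function and the 10s period here are illustrative stand-ins for `c.sync` and `c.renewInterval`):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	stopCh := make(chan struct{})
	renew := func() { fmt.Println("renewing lease at", time.Now()) }

	// With jitterFactor=0.04 and sliding=true, each wait between runs is
	// drawn uniformly from [10s, 10.4s), and the timer starts only after
	// renew() returns (sliding), so a slow renewal pushes the next run later.
	go wait.JitterUntil(renew, 10*time.Second, 0.04, true, stopCh)

	time.Sleep(time.Minute)
	close(stopCh)
}
```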
The alternative would be to use:
wait.NonSlidingUntil
[so that we don't move forward requests on spikes]
I don't have a strong opinion either way, but we should ensure that we won't violate any assumptions:
- the kubelet can definitely tolerate this
- IIUC the kube-apiserver should also be fine with this. @lavalamp - can you confirm?
I was considering it, but preferred to stick with sliding. If the function doesn't finish within 10s, we could potentially call it again, which might cause a race condition. I'm not an expert on this part of the code, but if non-sliding is safe here I'm happy to change it.
I don't think using NonSlidingUntil would cause any race condition. Rather, if the function takes > 10s, the next attempt will run immediately.
That said, I think it's better to keep sliding (see the sketch below):
- if the function took long but eventually succeeded, it makes sense to wait the full 10s before refreshing the lease again (the lease is fresh when the function ends);
- if the function took long and hasn't succeeded, retrying immediately would technically make sense, but I'm afraid it may generate higher load on an unhealthy master: instead of nodes refreshing their leases every 10s, all nodes would be continuously inside c.sync. The number of requests would depend on the actual backoff strategy there, but it would still be more than the one request per 10s we get with sliding.
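To make the sliding vs. non-sliding distinction concrete, here is a simplified sketch of the two loop shapes (ignoring jitter); it is not the actual library code, just an illustration of the timing behavior:

```go
package main

import (
	"fmt"
	"time"
)

// slidingLoop starts the period timer only *after* f returns (the behavior
// this PR keeps via sliding=true): a slow f pushes the next run later, so
// the gap between run starts is runtime(f) + period.
func slidingLoop(f func(), period time.Duration, stop <-chan struct{}) {
	for {
		f()
		select {
		case <-stop:
			return
		case <-time.After(period):
		}
	}
}

// nonSlidingLoop starts the timer *before* f runs. There is no concurrent
// invocation, but if f takes longer than period, the next run starts
// immediately after f returns, which is the retry-pressure concern above.
func nonSlidingLoop(f func(), period time.Duration, stop <-chan struct{}) {
	for {
		t := time.After(period)
		f()
		select {
		case <-stop:
			return
		case <-t:
		}
	}
}

func main() {
	stop := make(chan struct{})
	go slidingLoop(func() { fmt.Println("tick", time.Now()) }, time.Second, stop)
	time.Sleep(5 * time.Second)
	close(stop)
}
```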
I took a deeper look into the kube-apiserver case and it is fine - even a much larger jitter would work there.
The argument about retries in the non-sliding case makes sense to me, so let's keep what we have now.
/kind bug
/remove-kind feature
/retest
/triage accept
@caesarxuchao: The label(s) `triage/accept` cannot be applied, because the repository doesn't have them. In response to this: /triage accept
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/triage accepted
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: marseel, wojtek-t
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
…652-upstream-release-1.21 Automated cherry pick of #101652: Add jitter to lease controller
…652-upstream-release-1.20 Automated cherry pick of #101652: Add jitter to lease controller
What type of PR is this?
/kind bug
What this PR does / why we need it:
Since P&F (API Priority and Fairness) was introduced, we have seen a significant degradation in API call latency. P&F was not the root cause; it was just a contributing factor.
Currently, node leases are renewed every 10s plus the latency of the renewal call itself. Renewals caught in a latency spike therefore complete slightly later than calls that were well distributed within the 10s window, so the spikes drift within the window and accumulate other, previously well-distributed calls.
I prepared a metric that counts the number of lease PUTs in each 100ms interval and computes the standard deviation over a 10s window.
With the default configuration on a 5k-node cluster, the standard deviation was 10x larger than with this patch.
Similarly, the number of lease updates per 100ms ranged from 0 to 300 without this change, and stayed between 40 and 60 with it.
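The measurement described above could look roughly like the following hypothetical sketch; the function name and setup are illustrative, not the actual metric code used for the test:

```go
package main

import (
	"fmt"
	"math"
	"time"
)

// stddevPer100ms buckets lease-update timestamps into 100ms bins over a 10s
// window (100 bins) and returns the standard deviation of the bin counts.
// Evenly distributed updates yield a low deviation; spikes yield a high one.
func stddevPer100ms(updates []time.Time, windowStart time.Time) float64 {
	const bins = 100 // 10s window / 100ms per bin
	counts := make([]float64, bins)
	for _, t := range updates {
		i := int(t.Sub(windowStart) / (100 * time.Millisecond))
		if i >= 0 && i < bins {
			counts[i]++
		}
	}
	var mean float64
	for _, c := range counts {
		mean += c
	}
	mean /= bins
	var variance float64
	for _, c := range counts {
		variance += (c - mean) * (c - mean)
	}
	return math.Sqrt(variance / bins)
}

func main() {
	now := time.Now()
	// Example: 300 updates bunched into a single 100ms spike.
	var spiky []time.Time
	for i := 0; i < 300; i++ {
		spiky = append(spiky, now.Add(3*time.Second))
	}
	fmt.Printf("spiky stddev: %.1f\n", stddevPer100ms(spiky, now))
}
```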
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: