[1.20] ResourceStore: fix segfault and update tests #4534
Conversation
Signed-off-by: Urvashi Mohnani <[email protected]>
[1.20] Point to k8s release-1.20 branch for tests
so we don't leak a goroutine Signed-off-by: Peter Hunt <[email protected]>
…pick-4412-to-release-1.20 [release-1.20] image pull: close progress chan
Signed-off-by: Urvashi Mohnani <[email protected]>
Signed-off-by: Skyler Clark <[email protected]>
This saves some time (a few seconds per test at least) as we avoid running setup_test (and cleanup_test has nothing to clean up), and removes some code duplication. Signed-off-by: Kir Kolyshkin <[email protected]>
...and add a status check to one case where we use run, to make it more obvious that `run` is really needed here. Signed-off-by: Kir Kolyshkin <[email protected]>
Using /dev/loop-control is problematic since it is not supposed to be read from or written to. Use /dev/kmsg, and actually enable the write test. Signed-off-by: Kir Kolyshkin <[email protected]>
…pick-4402-to-release-1.20 [release-1.20] convert shmsize annotation to handler_allowed
…pick-4369-to-release-1.20 [release-1.20] test/devices.bats: nits
[1.20] Bump version to v1.20.0-rc.1
which changed after a k8s release-notes package bump Signed-off-by: Peter Hunt <[email protected]>
This test case sometimes fails like this:

> not ok 40 ctr execsync
> # (in test file ./ctr.bats, line 429)
> # `[[ "$output" == *"command timed out"* ]]' failed
> .....
> # time="2020-12-02T23:28:58Z" level=fatal msg="connect: connect endpoint 'unix:///tmp/tmp.ncaFI2tZzn/crio.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"

This happens because our CI might be slow, and the 1-second timeout also applies to connect. Increase the timeout and the sleep accordingly to fix the flake. Signed-off-by: Kir Kolyshkin <[email protected]>
[1.20] release-notes: fix flags
…pick-4411-to-release-1.20 [release-1.20] test/ctr.bats: fix a "ctr execsync" flake
ResourceCache is a structure that keeps track of partially created Pods and Containers. Its features include: - tracking pods and containers after their initial creation times out - automatic garbage collection (after a timer) Signed-off-by: Peter Hunt <[email protected]>
Signed-off-by: Peter Hunt <[email protected]>
Before when a client's request for a RunPodSandbox or ContainerCreate timed out, CRI-O would clean up the resource. However, these requests usually fail when the node is under load. In these cases, it would be better to hold onto the progress, not get rid of it. This commit uses the previously created ResourceCache to cache the progress of a container creation and sandbox run. When a duplicate name is detected, before erroring, the server checks in the ResourceCache to see if we've already successfully created that resource. If so, we return it as if we'd just created it. It also moves the SetCreated call to after the resource is deemed as not having timed out. Hopefully, this reduces the load on already overloaded nodes. Signed-off-by: Peter Hunt <[email protected]>
Even if we use the resource cache as is, the user is still bombarded with messages saying the name is reserved. This is bad UX, and we're capable of improving it. Add a watcher idiom to the resource cache, allowing a handler routine of RunPodSandbox or CreateContainer to wait for a resource to become available. A key point: if the resource becomes available while we're watching for it, *we still need to error on this request*. This is because we could get the resource from the cache, remove it (thus meaning it won't be cleaned up), and the kubelet's request could time out and it could try again. This would cause us to leak a resource. This way, if we get into this situation, there need to be three requests: the first times out; the second discovers the resource is ready, but still errors; the third actually retrieves that resource and returns it. This will result in many fewer "name is reserved" errors (from one every 2 seconds to one every 4 minutes). Signed-off-by: Peter Hunt <[email protected]>
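The watcher idiom above can be sketched with channels. WatcherForResource is the name used elsewhere in this PR; the rest of this store is an assumption for illustration:

```go
package main

import (
	"fmt"
	"sync"
)

// store is a tiny stand-in for the ResourceStore's watch support.
type store struct {
	mu       sync.Mutex
	ready    map[string]bool
	watchers map[string][]chan struct{}
}

func newStore() *store {
	return &store{ready: map[string]bool{}, watchers: map[string][]chan struct{}{}}
}

// WatcherForResource returns a channel that is closed once the named
// resource is Put (or immediately, if it is already available).
func (s *store) WatcherForResource(name string) chan struct{} {
	s.mu.Lock()
	defer s.mu.Unlock()
	ch := make(chan struct{})
	if s.ready[name] {
		close(ch)
		return ch
	}
	s.watchers[name] = append(s.watchers[name], ch)
	return ch
}

// Put marks the resource available and wakes every watcher.
func (s *store) Put(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.ready[name] = true
	for _, ch := range s.watchers[name] {
		close(ch)
	}
	s.watchers[name] = nil
}

func main() {
	s := newStore()
	watcher := s.WatcherForResource("ctr-1")
	go s.Put("ctr-1")
	<-watcher
	// Per the commit message, a request that was merely *watching* still
	// errors; only the kubelet's next request actually claims the resource.
	fmt.Println("resource ready; erroring anyway so the retry can claim it")
}
```

Closing the channel (rather than sending on it) lets any number of blocked handlers wake up without the store knowing how many there are.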
Now that we plan on caching the results of a pod sandbox creation, we shouldn't short-circuit the network creation. In a perfect world, we'd give the CNI plugin unbounded time, which would allow us to reuse even the longest of CNI creations. However, this leaves the chance that the CNI plugin runs forever, which is not ideal. Instead, give the sandbox network creation 5 minutes (a minute more than the full request), to improve the odds we have a completed sandbox that can be reused rather than thrown away. Signed-off-by: Peter Hunt <[email protected]>
timeout.bats is a test suite that tests different scenarios related to timeouts in sandbox running and container creation. It requires a crictl that knows about the -T option. Signed-off-by: Peter Hunt <[email protected]>
Older version of this code used to have a goroutine for each resource, which is no longer the case, so remove the obsoleted part of the doc. It is already described elsewhere how the resource is becoming stale and removed. Signed-off-by: Kir Kolyshkin <[email protected]>
Signed-off-by: Kir Kolyshkin <[email protected]>
The 10s timeout is sometimes not enough to finish container or pod creation. Increase it to 30s to fix occasional flakes, and move it to a separate function, wait_crio. While at it:
- increase the conmon sleep and the crictl create/runp cancel timeout to 3s;
- move create_conmon to setup;
- fix ID checks (we're looking for a full string, not a substring);
- change a 3m timeout to 150s.
Not critical, just nits. Signed-off-by: Kir Kolyshkin <[email protected]>
Signed-off-by: Peter Hunt <[email protected]>
[1.20] improve timeout handling and fix flakes
Signed-off-by: Peter Hunt <[email protected]>
bump to v1.20.0
This is mostly to have containers/storage@83150e3 which should fix the authentication failure during bootstrap node install. BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1907770 Signed-off-by: Qi Wang <[email protected]>
Signed-off-by: Artyom Lukianov <[email protected]>
Signed-off-by: Artyom Lukianov <[email protected]>
Signed-off-by: Artyom Lukianov <[email protected]>
…ags-1.20 [1.20] Provide functionality to start infra containers on the specified set of CPUs
The `registries` option will be removed from CRI-O 1.21. To better communicate this upcoming change to users, add a deprecation warning when loading a config and another to the man page. Signed-off-by: Valentin Rothberg <[email protected]>
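A deprecation warning of this kind is typically emitted at config-load time. This is a hedged sketch only; the field name, message wording, and helper are assumptions, not CRI-O's actual config code:

```go
package main

import "fmt"

// config is a hypothetical slice of the CRI-O config with just the
// deprecated field we care about here.
type config struct {
	Registries []string
}

// warnDeprecated returns deprecation notices for options that are set
// but scheduled for removal, so the caller can log them on config load.
func warnDeprecated(c *config) []string {
	var warnings []string
	if len(c.Registries) > 0 {
		warnings = append(warnings,
			"the `registries` option is deprecated and will be removed in CRI-O 1.21")
	}
	return warnings
}

func main() {
	c := &config{Registries: []string{"quay.io"}}
	for _, w := range warnDeprecated(c) {
		fmt.Println("warning:", w)
	}
}
```

Warning only when the option is actually in use avoids spamming users whose configs are already clean.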
…ion-notice add `registries` deprecation notice
Signed-off-by: Peter Hunt <[email protected]>
…pick-4468-to-release-1.20 [release-1.20] runtime_vm: set finished time when containers stop
Signed-off-by: Mrunal Patel <[email protected]>
Signed-off-by: Mrunal Patel <[email protected]>
[1.20] Support pprof profile over unix socket
Before, it was possible to segfault when a WatcherForResource call was followed by a Get, as we didn't check that the resource was actually put. Fix this. Signed-off-by: Peter Hunt <[email protected]>
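The failure mode can be shown with a hypothetical reduction of the store (not CRI-O's actual ResourceStore): WatcherForResource creates a map entry before the resource is Put, so the entry's resource pointer can still be nil when Get runs.

```go
package main

import "fmt"

type resource struct{ id string }

// store is a deliberately tiny model: WatcherForResource inserts a
// placeholder entry, so an entry existing does not mean the resource does.
type store struct {
	entries map[string]*resource
}

func (s *store) WatcherForResource(name string) {
	if _, ok := s.entries[name]; !ok {
		s.entries[name] = nil // placeholder: watched, but not yet put
	}
}

// Get with the fix: verify the resource was actually put before
// dereferencing it. Without the nil check, returning r.id on a
// watched-but-never-put entry is a nil pointer dereference.
func (s *store) Get(name string) (string, bool) {
	r, ok := s.entries[name]
	if !ok || r == nil {
		return "", false
	}
	return r.id, true
}

func main() {
	s := &store{entries: map[string]*resource{}}
	s.WatcherForResource("ctr-1")
	id, ok := s.Get("ctr-1") // safe now: no Put has happened yet
	fmt.Println(id, ok)
}
```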
Signed-off-by: Peter Hunt <[email protected]>
We need to specifically register "Describe" functions, but ginkgo doesn't allow us to register multiple ones. Wrap different functionality in different Contexts so they all run. Signed-off-by: Peter Hunt <[email protected]>
Signed-off-by: Peter Hunt <[email protected]>
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: haircommander.
What type of PR is this?
What this PR does / why we need it:
There is a chance that a WatcherForResource() call followed by a Get() would cause CRI-O to segfault. Fix this.
Also extend the test suite with a test for this and another situation, and move the tests around so they're all run.
Which issue(s) this PR fixes:
Special notes for your reviewer:
cherry-pick of #4530
Does this PR introduce a user-facing change?